
Neural Computation

Hongqiao Wang, Jinglai Li
We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP)-based method to approximate the joint distribution of the unknown parameters and the data, built on recent work (Kandasamy, Schneider, & Póczos, 2015). In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. We then provide an adaptive algorithm to construct such an approximation, where an active learning method is used to choose the design points...
September 14, 2018: Neural Computation
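Such a surrogate can be sketched in a few lines. The snippet below is a toy illustration only, not the authors' algorithm: it fits a zero-mean GP with a squared-exponential kernel to log-likelihood values at a handful of design points, and picks the next design point where the posterior variance is largest (one simple active-learning criterion). All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    """Squared-exponential covariance between 1-D point sets A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and variance for a surrogate of the log-likelihood."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = rbf_kernel(x_test, x_test) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

loglik = lambda theta: -0.5 * theta ** 2   # cheap stand-in for an expensive log-likelihood
design = np.linspace(-3.0, 3.0, 7)         # initial design points
grid = np.linspace(-3.0, 3.0, 101)
mean, var = gp_posterior(design, loglik(design), grid)
next_point = grid[np.argmax(var)]          # active learning: query where variance is largest
```

In a real setting the surrogate is refit after each new (expensive) likelihood evaluation at `next_point`.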
Yingyi Chen, Qianqian Cheng, Yanjun Cheng, Hao Yang, Huihui Yu
Analysis and forecasting of sequential data, key problems in many domains of engineering and science, have attracted the attention of researchers from different communities. When predicting the future probability of events from time series, recurrent neural networks (RNNs) are an effective tool: they possess the learning ability of feedforward neural networks and extend their expressive power through dynamic equations. Moreover, RNNs can model several computational structures. Researchers have developed various RNNs with different architectures and topologies...
September 14, 2018: Neural Computation
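The "dynamic equations" mentioned above reduce, in the simplest case, to a recurrent state update h_t = tanh(W_x x_t + W_h h_{t-1} + b). A minimal sketch, with sizes and names that are illustrative rather than taken from the article:

```python
import numpy as np

def rnn_forward(xs, W_x, W_h, b):
    """Minimal recurrent dynamics: h_t = tanh(W_x x_t + W_h h_{t-1} + b)."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)   # state depends on the whole past
        states.append(h)
    return np.array(states)

rng = np.random.default_rng(6)
W_x = rng.normal(scale=0.5, size=(4, 3))   # input-to-hidden weights
W_h = rng.normal(scale=0.5, size=(4, 4))   # hidden-to-hidden weights (the recurrence)
b = np.zeros(4)
H = rnn_forward(rng.normal(size=(10, 3)), W_x, W_h, b)   # 10 time steps, 3 inputs
```

Setting `W_h` to zero recovers a plain feedforward layer applied independently at each step, which is exactly the expressive power the recurrence adds.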
Andrés Pomi, Eduardo Mizraji, Juan Lin
Human brains seem to represent categories of objects and actions as locations in a continuous semantic space across the cortical surface that reflects the similarity among categories. This vision of the semantic organization of information in the brain, suggested by recent experimental findings, is in harmony with the well-known topographically organized somatotopic, retinotopic, and tonotopic maps in the cerebral cortex. Here we show that these topographies can be operationally represented with context-dependent associative memories...
September 14, 2018: Neural Computation
Rongxin Bao, Xu Yuan, Zhikui Chen, Ruixin Ma
The success of convolutional neural networks (CNNs) is accompanied by deep models and heavy storage costs. For compressing CNNs, we propose an efficient and robust pruning approach, cross-entropy pruning (CEP). Given a trained CNN model, connections are divided into groups according to their corresponding output neurons. All connections whose cross-entropy errors fall below a grouping threshold are then removed, yielding a sparse model in which the number of parameters is significantly reduced relative to the baseline. This letter also presents a highest cross-entropy pruning (HCEP) method that keeps a small portion of the weights with the highest cross-entropy values...
September 14, 2018: Neural Computation
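The group-wise thresholding structure described above can be sketched as follows. Note that the per-connection score used here is a placeholder (weight magnitude); the letter's actual criterion is a cross-entropy error, which is not reproduced here.

```python
import numpy as np

def groupwise_prune(W, scores, keep_frac=0.5):
    """Group-wise pruning: each row of W is the fan-in of one output neuron.
    Within each row (group), zero out connections whose score falls below
    that group's own threshold."""
    mask = np.zeros_like(W, dtype=bool)
    for i in range(W.shape[0]):                       # one group per output neuron
        thresh = np.quantile(scores[i], 1.0 - keep_frac)
        mask[i] = scores[i] >= thresh
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 10))                          # 4 output neurons, 10 inputs each
pruned, mask = groupwise_prune(W, np.abs(W), keep_frac=0.3)
```

Because each group gets its own threshold, a neuron with uniformly small weights still keeps some of its connections, which is the point of grouping by output neuron.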
Sander W Keemink, Dharmesh V Tailor, Mark C W van Rossum
Throughout the nervous system, information is commonly coded in activity distributed over populations of neurons. In idealized situations where a single, continuous stimulus is encoded in a homogeneous population code, the value of the encoded stimulus can be read out without bias. However, in many situations, multiple stimuli are simultaneously present; for example, multiple motion patterns might overlap. Here we find that when multiple stimuli that overlap in their neural representation are simultaneously encoded in the population, biases in the read-out emerge...
September 14, 2018: Neural Computation
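A toy illustration of such a read-out bias, assuming Gaussian tuning curves and a naive center-of-mass decoder (not the analysis in the letter): when two overlapping stimuli are encoded and the summed population response is read out as if a single stimulus were present, the estimate is pulled toward the midpoint between the two stimuli.

```python
import numpy as np

def tuning(pref, s, width=20.0):
    """Gaussian tuning curve: response of a neuron preferring `pref` to stimulus s."""
    return np.exp(-0.5 * ((pref - s) / width) ** 2)

prefs = np.linspace(-180.0, 180.0, 73)        # preferred directions of the population
s1, s2 = -30.0, 30.0                          # two simultaneously present stimuli
r = tuning(prefs, s1) + tuning(prefs, s2)     # summed (overlapping) population response

def center_of_mass(prefs, r):
    return np.sum(prefs * r) / np.sum(r)

est = center_of_mass(prefs, r)   # lands near 0, far from either true stimulus
```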
David M Brandman, Michael C Burkhart, Jessica Kelemen, Brian Franco, Matthew T Harrison, Leigh R Hochberg
Intracortical brain-computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has previously been shown to degrade with signal nonstationarities; specifically, with changes in the statistics of the data between training and testing data sets. These include changes to the neural tuning profiles and baseline shifts in the firing rates of recorded neurons, as well as nonphysiological noise. While progress has been made toward providing long-term user control via decoder recalibration, relatively little work has been dedicated to making the decoding algorithm itself more resilient to signal nonstationarities...
September 14, 2018: Neural Computation
Yoram Baram
Experimental constraints have traditionally implied separate studies of different cortical functions, such as memory and sensory-motor control. Yet certain cortical modalities, while repeatedly observed and reported, have not been clearly identified with one cortical function or another. Specifically, while neuronal membrane and synapse polarities with respect to a certain potential value have been attracting considerable interest in recent years, the purposes of such polarities have largely remained a subject for speculation and debate...
September 14, 2018: Neural Computation
Beatrice Alexandra Golomb
IMPORTANCE: A mystery illness striking U.S. and Canadian diplomats to Cuba (and now China) "has confounded the FBI, the State Department and US intelligence agencies" (Lederman, Weissenstein, & Lee, 2017). Sonic explanations for the so-called health attacks have long dominated media reports, propelled by peculiar sounds heard and auditory symptoms experienced. Sonic mediation was justly rejected by experts. We assessed whether pulsed radiofrequency/microwave radiation (RF/MW) exposure can accommodate reported facts in diplomats, including unusual ones...
September 5, 2018: Neural Computation
Ulisse Ferrari, Stéphane Deny, Olivier Marre, Thierry Mora
Neural noise sets a limit on information transmission in sensory systems. In several areas, the spiking response to a repeated stimulus has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model explaining this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, that accurately predicts the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters but can reproduce the observed variability in retinal recordings in various conditions...
August 27, 2018: Neural Computation
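The regularity in question is usually quantified by the Fano factor (count variance over count mean), which equals 1 for a Poisson process and falls below 1 for sub-Poisson spiking. A minimal numerical illustration, using a binomial model as a generic sub-Poisson stand-in rather than the letter's two-parameter model:

```python
import numpy as np

rng = np.random.default_rng(1)
rate, trials = 20.0, 5000

# Poisson spiking: count variance equals count mean (Fano factor 1).
poisson_counts = rng.poisson(rate, size=trials)

# Binomial spiking over n discrete bins with per-bin probability p:
# mean n*p, variance n*p*(1-p), so the Fano factor is 1 - p < 1.
n_bins, p = 100, rate / 100
sub_counts = rng.binomial(n_bins, p, size=trials)

fano = lambda counts: counts.var() / counts.mean()
```

Here `fano(sub_counts)` concentrates near 1 - p = 0.8, i.e., spiking more regular than Poisson.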
Tingwei Gao, Yueting Chai
This study focuses on predicting stock closing prices using recurrent neural networks (RNNs). A long short-term memory (LSTM) model, a type of RNN, is combined with basic stock trading data and technical indicators and introduced as a novel method to predict the closing price of the stock market. We reduce the dimension of the technical indicators by principal component analysis (PCA). To train the model, several optimization strategies are employed, including adaptive moment estimation (Adam) and Glorot uniform initialization...
August 27, 2018: Neural Computation
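The PCA step for the technical indicators can be sketched via the SVD of the centered data matrix; the array sizes below are illustrative assumptions, not figures from the study:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                     # center each indicator
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]                # scores and component directions

rng = np.random.default_rng(2)
indicators = rng.normal(size=(250, 12))         # e.g., 250 trading days x 12 indicators
Z, comps = pca_reduce(indicators, k=4)          # reduced features fed to the LSTM
```

The reduced matrix `Z` then replaces the raw indicators as input features for the sequence model.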
Kishan Wimalawarne, Makoto Yamada, Hiroshi Mamitsuka
We propose a set of convex low-rank inducing norms for coupled matrices and tensors (hereafter referred to as coupled tensors), in which information is shared between the matrices and tensors through common modes. More specifically, we first propose a mixture of the overlapped trace norm and the latent norms with the matrix trace norm, and then, propose a completion model regularized using these norms to impute coupled tensors. A key advantage of the proposed norms is that they are convex and can be used to find a globally optimal solution, whereas existing methods for coupled learning are nonconvex...
August 27, 2018: Neural Computation
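Trace-norm-regularized completion models of this kind are typically solved with proximal methods whose computational kernel is singular-value soft-thresholding, the proximal operator of the trace norm. A sketch of that operator (not the authors' full coupled-norm solver):

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * (trace norm): soft-threshold the singular values."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(S - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 4)) @ rng.normal(size=(4, 5))   # a rank-<=4 matrix
B = svt(A, tau=1.0)                                      # low-rank shrinkage of A
```

Because the objective is convex, iterating such proximal steps converges to a global optimum, which is the advantage over nonconvex coupled-learning methods noted in the abstract.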
Nicolai Waniek
Grid cells of the rodent entorhinal cortex are essential for spatial navigation. Although their function is commonly believed to be either path integration or localization, the origin or purpose of their hexagonal firing fields remains disputed. Here they are proposed to arise as an optimal encoding of transitions in sequences. First, storage requirements for transitions in general episodic sequences are examined using propositional logic and graph theory. Subsequently, transitions in complete metric spaces are considered under the assumption of an ideal sampling of an input space...
August 27, 2018: Neural Computation
Michael Rule, Guido Sanguinetti
Modeling and interpreting spike train data is a task of central importance in computational neuroscience, with significant translational implications. Two popular classes of data-driven models for this task are autoregressive point-process generalized linear models (PPGLM) and latent state-space models (SSM) with point-process observations. In this letter, we derive a mathematical connection between these two classes of models. By introducing an auxiliary history process, we represent exactly a PPGLM in terms of a latent, infinite-dimensional dynamical system, which can then be mapped onto an SSM by basis function projections and moment closure...
August 27, 2018: Neural Computation
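An autoregressive PPGLM of the kind discussed can be simulated in a few lines: the conditional intensity is the exponential of a baseline plus a filtered spike history. The filter shape and rates below are illustrative assumptions, not parameters from the letter:

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 2000, 0.001                         # 2 s of 1 ms bins
b = np.log(20.0)                            # baseline log-rate (20 Hz)
h = -2.0 * np.exp(-np.arange(10) / 3.0)     # inhibitory history filter (refractoriness)

spikes = np.zeros(T)
for t in range(T):
    past = spikes[max(0, t - 10):t][::-1]        # most recent spike first
    drive = b + np.dot(h[:len(past)], past)      # autoregressive log-intensity
    rate = np.exp(drive)                         # conditional intensity (Hz)
    spikes[t] = rng.random() < rate * dt         # Bernoulli approximation per bin
```

The history process `past` is exactly the auxiliary quantity that, made continuous and infinite-dimensional, links the PPGLM to a latent state-space model.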
Vafa Andalibi, Henri Hokkanen, Simo Vanni
Simulation of the cerebral cortex requires a combination of extensive domain-specific knowledge and efficient software. However, when the complexity of the biological system is combined with that of the software, the likelihood of coding errors increases, which slows model adjustments. Moreover, few life scientists are familiar with software engineering and would benefit from simplicity in the form of a high-level abstraction of the biological model. Our primary aim was to build a scalable cortical simulation framework for personal computers...
August 27, 2018: Neural Computation
SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D Lee
We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom. Conventional data augmentation methods rely on sampling large numbers of training examples from these manifolds. Instead, we propose an iterative algorithm, [Formula: see text], based on a cutting plane approach that efficiently solves a quadratic semi-infinite programming problem to find the maximum margin solution. We provide a proof of convergence as well as a polynomial bound on the number of iterations required for a desired tolerance in the objective function...
August 27, 2018: Neural Computation
Haifeng Zhao, Siqi Wang, Zheng Wang
Least squares regression (LSR) is a fundamental statistical analysis technique that has been widely applied to feature learning. However, owing to its simplicity, LSR tends to neglect the local structure of the data, and many methods have therefore imposed orthogonal constraints to preserve more local information. Another major drawback of LSR is that the loss between the soft regression results and the hard target values cannot precisely reflect classification ability; to address this, the idea of a large-margin constraint has been put forward...
July 18, 2018: Neural Computation
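The LSR baseline the letter starts from has a familiar closed form; with a small ridge term added for numerical stability it reads W = (X'X + lam*I)^{-1} X'Y. A minimal sketch with illustrative variable names:

```python
import numpy as np

def ridge_lsr(X, Y, lam=1e-2):
    """Closed-form regularized least squares: W = (X'X + lam*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 8))        # 100 samples, 8 features
W_true = rng.normal(size=(8, 3))
Y = X @ W_true                       # soft regression targets (noise-free here)
W = ridge_lsr(X, Y, lam=1e-6)        # recovers W_true on this clean example
```

The extensions the abstract describes (orthogonal and large-margin constraints) modify this objective, which is why the plain closed form above no longer applies to them directly.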
Chang Sub Kim
We formulate the computational processes of perception in the framework of the principle of least action by postulating the theoretical action as a time integral of the variational free energy in the neurosciences. The free-energy principle is accordingly rephrased, on autopoietic grounds, as follows: all viable organisms attempt to minimize their sensory uncertainty about an unpredictable environment over a temporal horizon. By taking the variation of informational action, we derive neural recognition dynamics (RD), which by construction reduces to the Bayesian filtering of external states from noisy sensory inputs...
July 18, 2018: Neural Computation
Stephanie Reynolds, Therese Abrahamsson, P Jesper Sjöström, Simon R Schultz, Pier Luigi Dragotti
In recent years, the development of algorithms to detect neuronal spiking activity from two-photon calcium imaging data has received much attention, yet few researchers have examined the metrics used to assess the similarity of detected spike trains with the ground truth. We highlight the limitations of the two most commonly used metrics, the spike train correlation and success rate, and propose an alternative, which we refer to as CosMIC. Rather than operating on the true and estimated spike trains directly, the proposed metric assesses the similarity of the pulse trains obtained from convolution of the spike trains with a smoothing pulse...
July 18, 2018: Neural Computation
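The construction described, convolving both spike trains with a smoothing pulse and comparing the resulting pulse trains, can be sketched as follows. The Gaussian pulse and cosine similarity used here are simplifying assumptions; CosMIC's actual pulse and normalization differ in detail.

```python
import numpy as np

def smoothed_similarity(true_spikes, est_spikes, width=5):
    """Convolve binary spike trains with a Gaussian pulse, then take the
    cosine similarity of the resulting pulse trains."""
    t = np.arange(-3 * width, 3 * width + 1)
    pulse = np.exp(-0.5 * (t / width) ** 2)
    a = np.convolve(true_spikes, pulse, mode="same")
    b = np.convolve(est_spikes, pulse, mode="same")
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

train = np.zeros(200)
train[[20, 80, 150]] = 1
shifted = np.zeros(200)
shifted[[22, 81, 149]] = 1     # small timing errors: similarity stays high
far = np.zeros(200)
far[[50, 110, 180]] = 1        # wrong spike times: similarity collapses
```

Unlike a bin-by-bin correlation of the raw trains, the smoothed comparison degrades gracefully with small timing errors, which is the behavior the metric is designed to capture.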
Stephen J Verzi, Fredrick Rothganger, Ojas D Parekh, Tu-Thach Quach, Nadine E Miner, Craig M Vineyard, Conrad D James, James B Aimone
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage...
July 18, 2018: Neural Computation
Richard M Golden
Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can guide the algorithm development and evaluation process and provide theoretical validation for a particular algorithm design. For many decades, the machine learning community has widely recognized the importance of stochastic approximation theory as a powerful tool for identifying explicit convergence conditions for adaptive learning machines...
July 18, 2018: Neural Computation