Read by QxMD icon Read

Neural Computation

Haifeng Zhao, Siqi Wang, Zheng Wang
Least squares regression (LSR) is a fundamental statistical analysis technique that has been widely applied to feature learning. However, because of its simplicity, it easily neglects the local structure of the data, and many methods have therefore imposed an orthogonality constraint to preserve more local information. Another major drawback of LSR is that the loss between the soft regression outputs and the hard target values does not precisely reflect classification ability; the idea of a large margin constraint has therefore been put forward...
July 18, 2018: Neural Computation
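As background for the abstract above, a minimal sketch of least squares regression used as a classifier: class labels are encoded as hard one-hot targets while the regression produces soft scores, which is exactly the mismatch the authors point to. The ridge term and toy data are illustrative assumptions, not the paper's method:

```python
import numpy as np

def lsr_fit(X, Y, lam=1e-2):
    """Ridge-regularized least squares: W = (X^T X + lam*I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy two-class problem with one-hot ("hard") targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
labels = (X[:, 0] > 0).astype(int)   # true class depends on feature 0 only
Y = np.eye(2)[labels]                # hard 0/1 targets
W = lsr_fit(X, Y)
pred = (X @ W).argmax(axis=1)        # soft regression scores -> class decision
print((pred == labels).mean())
```

The soft scores `X @ W` can be far from 0 or 1 even when the argmax decision is correct, which is why the squared loss against hard targets is a loose proxy for classification accuracy.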
Chang Sub Kim
We formulate the computational processes of perception in the framework of the principle of least action by postulating the theoretical action as a time integral of the variational free energy in the neurosciences. The free-energy principle is accordingly rephrased, on autopoietic grounds, as follows: all viable organisms attempt to minimize their sensory uncertainty about an unpredictable environment over a temporal horizon. By taking the variation of informational action, we derive neural recognition dynamics (RD), which by construction reduces to the Bayesian filtering of external states from noisy sensory inputs...
July 18, 2018: Neural Computation
Stephanie Reynolds, Therese Abrahamsson, P Jesper Sjöström, Simon R Schultz, Pier Luigi Dragotti
In recent years, the development of algorithms to detect neuronal spiking activity from two-photon calcium imaging data has received much attention, yet few researchers have examined the metrics used to assess the similarity of detected spike trains with the ground truth. We highlight the limitations of the two most commonly used metrics, the spike train correlation and success rate, and propose an alternative, which we refer to as CosMIC. Rather than operating on the true and estimated spike trains directly, the proposed metric assesses the similarity of the pulse trains obtained from convolution of the spike trains with a smoothing pulse...
July 18, 2018: Neural Computation
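The idea behind the proposed metric above, comparing smoothed versions of the spike trains rather than the raw binary trains, can be sketched as follows. The triangular pulse and cosine similarity here are illustrative stand-ins for the general approach, not the exact CosMIC definition:

```python
import numpy as np

def smooth(spikes, width=5):
    """Convolve a binary spike train with a normalized triangular pulse."""
    pulse = np.concatenate([np.arange(1, width + 1), np.arange(width - 1, 0, -1)])
    return np.convolve(spikes, pulse / pulse.sum(), mode="same")

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

true_train = np.zeros(200)
true_train[[20, 60, 140]] = 1
est_train = np.zeros(200)
est_train[[22, 59, 141]] = 1     # estimate with small timing jitter
miss_train = np.zeros(200)
miss_train[[90, 170]] = 1        # spikes unrelated to the ground truth

print(cosine_similarity(smooth(true_train), smooth(est_train)))   # high
print(cosine_similarity(smooth(true_train), smooth(miss_train)))  # near 0
```

On the raw trains, a two-bin jitter would score zero overlap; smoothing first makes the metric tolerant of small timing errors while still penalizing missed and spurious spikes.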
Stephen J Verzi, Fredrick Rothganger, Ojas D Parekh, Tu-Thach Quach, Nadine E Miner, Craig M Vineyard, Conrad D James, James B Aimone
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage...
July 18, 2018: Neural Computation
M B Milde, O J N Bertrand, H Ramachandran, M Egelhaaf, E Chicca
Apparent motion of the surroundings on an agent's retina can be used to navigate through cluttered environments, avoid collisions with obstacles, or track targets of interest. The pattern of apparent motion of objects (i.e., the optic flow) contains spatial information about the surrounding environment. For a small, fast-moving agent, as used in search and rescue missions, it is crucial to quickly estimate the distance to nearby objects in order to avoid collisions. This estimation cannot be done by conventional methods, such as frame-based optic flow estimation, given the size, power, and latency constraints of the necessary hardware...
July 18, 2018: Neural Computation
Qinglong Wang, Kaixuan Zhang, Alexander G Ororbia II, Xinya Xing, Xue Liu, C Lee Giles
Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs...
July 18, 2018: Neural Computation
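The production rules extracted from such second-order RNNs typically take the form of a deterministic finite automaton recognizing the target grammar. As an illustration of one such target language (not of the extraction procedure itself), Tomita grammar 4 is commonly defined as the set of binary strings containing no "000" substring, and its DFA can be written down directly:

```python
def tomita4(s):
    """DFA for Tomita grammar 4: accept binary strings with no '000' substring.
    The state is the count of consecutive trailing 0s (3 = dead/reject state)."""
    zeros = 0
    for ch in s:
        if ch == "0":
            zeros += 1
            if zeros == 3:
                return False   # entered the absorbing reject state
        else:
            zeros = 0
    return True

print(tomita4("110100"))   # True: no run of three 0s
print(tomita4("10001"))    # False: contains '000'
```

A stably extracted rule set would reproduce exactly this transition structure, which is why DFAs over the Tomita grammars serve as a clean benchmark for rule extraction.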
Richard M Golden
Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can guide the algorithm development and evaluation process and provide theoretical validation for a particular algorithm design. For many decades, the machine learning community has widely recognized the importance of stochastic approximation theory as a powerful tool for identifying explicit convergence conditions for adaptive learning machines...
July 18, 2018: Neural Computation
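The stochastic approximation theory referred to above rests on the classical Robbins-Monro conditions: step sizes a_t with sum a_t = infinity and sum a_t^2 < infinity guarantee convergence under regularity assumptions. A minimal sketch with a_t = 1/t on a noisy quadratic objective (the target, noise model, and schedule are illustrative assumptions, not the letter's algorithm):

```python
import numpy as np

# Minimize E[(w - x)^2] / 2 from noisy samples x ~ N(theta, 1).
# Step sizes a_t = 1/t satisfy the Robbins-Monro conditions:
# sum a_t diverges (enough total movement), sum a_t^2 converges (noise averages out).
rng = np.random.default_rng(1)
theta = 3.0
w = 0.0
for t in range(1, 20001):
    x = theta + rng.normal()
    grad = w - x                # noisy gradient of (w - x)^2 / 2
    w -= (1.0 / t) * grad
print(w)                        # converges toward theta = 3.0
```

With this particular schedule the iterate is exactly the running sample mean, which makes the convergence guarantee easy to see; a constant step size would instead leave the iterate fluctuating in a noise ball around theta.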
Yuval Harel, Ron Meir, Manfred Opper
Neural decoding may be formulated as dynamic state estimation (filtering) based on point-process observations, a generally intractable problem. Numerical sampling techniques are often practically useful for the decoding of real neural data. However, they are less useful as theoretical tools for modeling and understanding sensory neural systems, since they lead to limited conceptual insight into optimal encoding and decoding strategies. We consider sensory neural populations characterized by a distribution over neuron parameters...
June 27, 2018: Neural Computation
Joaquin Rapela, Marissa Westerfield, Jeanne Townsend
This letter makes scientific and methodological contributions. Scientifically, it demonstrates a new and behaviorally relevant effect of temporal expectation on the phase coherence of the electroencephalogram (EEG). Methodologically, it introduces novel methods to characterize EEG recordings at the single-trial level. Expecting events in time can lead to more efficient behavior. A remarkable finding in the study of temporal expectation is the foreperiod effect on reaction time, that is, the influence on reaction time of the delay between a warning signal and a succeeding imperative stimulus to which subjects are instructed to respond as quickly as possible...
June 27, 2018: Neural Computation
Sarah Schwöbel, Stefan Kiebel, Dimitrije Marković
When modeling goal-directed behavior in the presence of various sources of uncertainty, planning can be described as an inference process. A solution to the problem of planning as inference was previously proposed in the active inference framework in the form of an approximate inference scheme based on variational free energy. However, this approximate scheme was based on the mean-field approximation, which assumes statistical independence of hidden variables, is known to show overconfidence, and may converge to local minima of the free energy...
June 27, 2018: Neural Computation
Zhitong Qiao, Yan Han, Xiaoxia Han, Han Xu, Will X Y Li, Dong Song, Theodore W Berger, Ray C C Cheung
A hippocampal prosthesis is a very large scale integration (VLSI) biochip that needs to be implanted in the biological brain to treat cognitive dysfunction. In this letter, we propose a novel low-complexity, small-area, and low-power programmable hippocampal neural network application-specific integrated circuit (ASIC) for a hippocampal prosthesis. It is based on a nonlinear dynamical model of the hippocampus, namely the multi-input, multi-output (MIMO) generalized Laguerre-Volterra model (GLVM). It can realize the real-time prediction of hippocampal neural activity...
June 27, 2018: Neural Computation
Robert D'Angelo, Richard Wood, Nathan Lowry, Geremy Freifeld, Haiyao Huang, Christopher D Salthouse, Brent Hollosi, Matthew Muresan, Wes Uy, Nhut Tran, Armand Chery, Dorothy C Poppe, Sameer Sonkusale
Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms...
June 27, 2018: Neural Computation
Wenyuan Li, Igor V Ovchinnikov, Honglin Chen, Zhe Wang, Albert Lee, Houchul Lee, Carlos Cepeda, Robert N Schwartz, Karlheinz Meier, Kang L Wang
The extreme complexity of the brain has long attracted the attention of neuroscientists and other researchers. More recently, neuromorphic hardware has matured into a tool for studying neuronal dynamics. Here, we study neuronal dynamics using different settings on a neuromorphic chip built with flexible neuron-model parameters. The distinctive feature of our network of leaky integrate-and-fire (LIF) neurons is the introduction of a weak-noise environment. We observed three different types of collective neuronal activity, with clear boundaries (phase transitions) separating the different types...
June 12, 2018: Neural Computation
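A minimal Euler-integration sketch of the leaky integrate-and-fire model with weak input noise mentioned above. All parameter values (time constant, threshold, noise amplitude) are illustrative assumptions, not those of the chip:

```python
import numpy as np

def simulate_lif(I=1.5, noise=0.05, tau=20.0, v_th=1.0, v_reset=0.0,
                 dt=0.1, steps=10000, seed=0):
    """Euler integration of tau * dv/dt = -v + I, plus weak Gaussian noise;
    the neuron spikes and resets whenever v crosses v_th."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += (-v + I) * dt / tau + noise * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif())          # suprathreshold drive: regular firing
print(simulate_lif(I=0.8))     # subthreshold drive: occasional noise-driven spikes
```

The contrast between the two regimes (drive above versus below threshold) is the kind of qualitative change in collective activity that such parameter sweeps expose.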
Thomas Parr, Karl J Friston
To act upon the world, creatures must change continuous variables such as muscle length or chemical concentration. In contrast, decision making is an inherently discrete process, involving the selection among alternative courses of action. In this article, we consider the interface between the discrete and continuous processes that translate our decisions into movement in a Newtonian world, and how movement informs our decisions. We do so by appealing to active inference, with a special focus on the oculomotor system...
June 12, 2018: Neural Computation
Najah F Ghalyan, David J Miller, Asok Ray
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series...
June 12, 2018: Neural Computation
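As a sketch of the symbolization pipeline the abstract describes, the code below partitions a time series with a threshold and measures the block entropy of the resulting symbol sequence. The logistic map at r = 4 is used because its partition at x = 0.5 is known to be generating; the helper names are our own, not from the paper:

```python
from collections import Counter

import numpy as np

def symbolize(x, thresholds):
    """Map each sample to a symbol: the index of its partition cell."""
    return np.digitize(x, thresholds)

def block_entropy(symbols, k=2):
    """Shannon entropy (bits) of length-k blocks of the symbol sequence."""
    blocks = [tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)]
    counts = Counter(blocks)
    n = sum(counts.values())
    p = np.array([c / n for c in counts.values()])
    return float(-(p * np.log2(p)).sum())

# Logistic map x_{n+1} = 4 x_n (1 - x_n) as a test dynamical system.
x = np.empty(5000)
x[0] = 0.3
for i in range(1, 5000):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

s = symbolize(x, [0.5])            # binary generating partition at x = 0.5
print(block_entropy(s, k=1))       # close to 1 bit for this partition
```

With a generating partition, block entropies grow at the Kolmogorov-Sinai entropy rate; a badly chosen partition underestimates it, which is what motivates estimating the partition from data.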
Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke
We explore classifier training for data sets with very few labels. We investigate this task using a neural network for nonnegative data. The network is derived from a hierarchical normalized Poisson mixture model with one observed and two hidden layers. With the single objective of likelihood optimization, both labeled and unlabeled data are naturally incorporated into learning. The neural activation and learning equations resulting from our derivation are concise and local. As a consequence, the network can be scaled using standard deep learning tools for parallelized GPU implementation...
August 2018: Neural Computation
Furui Liu, Laiwan Chan
In this letter, we study the confounder detection problem in the linear model, where the target variable [Formula: see text] is predicted using its [Formula: see text] potential causes [Formula: see text]. Based on an assumption of a rotation-invariant generating process of the model, a recent study shows that the spectral measure induced by the regression coefficient vector with respect to the covariance matrix of [Formula: see text] is close to a uniform measure in purely causal cases, but it differs from a uniform measure characteristically in the presence of a scalar confounder...
August 2018: Neural Computation
Michiel Stock, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem Waegeman
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning...
August 2018: Neural Computation
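One standard construction in kernel-based pairwise learning is the Kronecker product pairwise kernel, k((u, v), (u', v')) = k_U(u, u') * k_V(v, v'). The sketch below builds it from two Gaussian kernels and fits kernel ridge regression on pair labels; the data and regularization constant are illustrative assumptions, not from the article:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
U = rng.normal(size=(6, 3))   # objects on one side of the pair (e.g., drugs)
V = rng.normal(size=(4, 3))   # objects on the other side (e.g., targets)
y = rng.normal(size=6 * 4)    # one label per (u, v) pair

# Kronecker pairwise kernel: k((u,v),(u',v')) = k_U(u,u') * k_V(v,v')
K = np.kron(rbf(U, U), rbf(V, V))                  # (24, 24), one row per pair
alpha = np.linalg.solve(K + 0.1 * np.eye(24), y)   # kernel ridge regression weights
print(K.shape, alpha.shape)
```

The Kronecker structure is what lets specialized algorithms avoid ever materializing the full pairwise Gram matrix, a recurring theme in the settings the article unifies.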
Javier J How, Saket Navlakha
Biological networks have long been known to be modular, containing sets of nodes that are highly connected internally. Less emphasis, however, has been placed on understanding how intermodule connections are distributed within a network. Here, we borrow ideas from engineered circuit design and study Rentian scaling, which states that the number of external connections between nodes in different modules is related to the number of nodes inside the modules by a power-law relationship. We tested this property in a broad class of molecular networks, including protein interaction networks for six species and gene regulatory networks for 41 human and 25 mouse cell types...
August 2018: Neural Computation
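Rentian scaling posits a power law E = c * N^p between the number of external connections E of a module and the number of nodes N inside it, so the Rent exponent p can be read off a log-log regression. A synthetic sketch (the counts here are simulated, not measurements from the networks in the abstract):

```python
import numpy as np

# Rentian scaling: external connections E grow with module size N as E = c * N^p.
rng = np.random.default_rng(3)
N = np.array([8, 16, 32, 64, 128, 256])            # module sizes
p_true, c = 0.7, 2.0
E = c * N**p_true * np.exp(0.05 * rng.normal(size=N.size))  # noisy counts

# The Rent exponent is the slope of log E against log N.
slope, intercept = np.polyfit(np.log(N), np.log(E), 1)
print(slope)   # estimated Rent exponent, near 0.7
```

An exponent well below 1 indicates that modules communicate through far fewer wires than they contain nodes, the hallmark of the cost-efficient layouts this scaling was originally used to characterize in circuit design.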
Henry D I Abarbanel, Paul J Rozdeba, Sasha Shirman
We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed...
August 2018: Neural Computation
