Neural Computation

Hideitsu Hino, Jun Fujiki, Shotaro Akaho, Noboru Murata
We propose a method for intrinsic dimension estimation. By fitting a regression model relating a power of the distance from an inspection point to the number of samples contained in a ball of that radius, we evaluate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method...
April 14, 2017: Neural Computation
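The scaling idea behind this kind of estimator can be sketched in a few lines: on a d-dimensional manifold, the number of samples within radius r of a point grows roughly as r^d, so the slope of log count versus log radius estimates d. This is a common baseline sketch of the counting-based approach, not the authors' exact regression model or likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_dimension(points, center, radii):
    """Estimate the local intrinsic dimension at `center` by regressing
    log neighbor counts on log radii: N(r) ~ r^d implies
    log N(r) = d * log r + const."""
    dists = np.linalg.norm(points - center, axis=1)
    counts = np.array([(dists <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

# A 2D plane embedded in 3D: the estimated dimension should be near 2.
plane = np.zeros((5000, 3))
plane[:, :2] = rng.uniform(-1, 1, size=(5000, 2))
d_hat = local_dimension(plane, np.zeros(3), radii=np.linspace(0.1, 0.5, 8))
```

A full estimator would also handle boundary effects and choose the radii adaptively.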
Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang
This letter aims at providing a refined error analysis for binary classification using support vector machine (SVM) with gaussian kernel and convex loss. Our first result shows that for some loss functions, such as logistic loss and exponential loss, SVM with gaussian kernel can reach almost the optimal learning rate, provided that the regression function is smooth. Our second result shows that for a large number of loss functions, under Tsybakov noise assumption, if the regression function is infinitely smooth, then SVM with gaussian kernel can achieve a learning rate of order [Formula: see text], where [Formula: see text] is the number of samples...
April 14, 2017: Neural Computation
Tieliang Gong, Zongben Xu, Hong Chen
Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers...
April 14, 2017: Neural Computation
Onder Aydemir
There are various kinds of brain monitoring techniques, including local field potential, near-infrared spectroscopy, magnetic resonance imaging (MRI), positron emission tomography, functional MRI, electroencephalography (EEG), and magnetoencephalography. Among these, EEG is the most widely used due to its portability, low setup cost, and noninvasiveness. Apart from other advantages, EEG signals also help to evaluate olfactory function. In such studies, EEG signals recorded during smelling are analyzed to determine whether the subject lacks any smelling ability or to measure the response of the brain...
April 14, 2017: Neural Computation
Takashi Kanamaru
We propose a pulse neural network that exhibits chaotic pattern alternations among stored patterns as a model of multistable perception, which is reflected in phenomena such as binocular rivalry and perceptual ambiguity. When we regard the mixed state of patterns as a part of each pattern, the durations of the retrieved pattern obey unimodal distributions. We confirmed that no chaotic properties are observed in the time series of durations, consistent with the findings of previous psychological studies. Moreover, it is shown that our model also reproduces two properties of multistable perception that characterize the relationship between the contrast of inputs and the durations...
April 14, 2017: Neural Computation
Asieh Abolpour Mofrad, Matthew G Parker
Clique-based neural associative memories, introduced by Gripon and Berrou (GB), have been shown to perform well, and in our previous work we improved the learning capacity and retrieval rate by local coding and precoding in the presence of partial erasures. We now take a step forward and consider nested-clique graph structures for the network. The GB model stores patterns as small cliques, and we here replace these by nested cliques. Simulation results show that the nested-clique structure enhances the clique-based model...
April 14, 2017: Neural Computation
Takashi Uehara, Matteo Sartori, Toshihisa Tanaka, Simone Fiori
The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI...
April 14, 2017: Neural Computation
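Averaging sample covariance matrices is nontrivial because symmetric positive-definite (SPD) matrices do not form a flat space. One standard scheme, sketched below, is the log-Euclidean mean: average in the matrix-logarithm domain and exponentiate back. This is one of several averaging schemes such a comparison might consider, not necessarily the letter's recommended method:

```python
import numpy as np

def log_euclidean_mean(spd_matrices):
    """Average SPD matrices in the log domain: expm(mean(logm(S_i))).
    logm/expm are computed via the eigendecomposition, which is valid
    for symmetric positive-definite inputs."""
    logs = []
    for s in spd_matrices:
        w, v = np.linalg.eigh(s)
        logs.append(v @ np.diag(np.log(w)) @ v.T)
    m = np.mean(logs, axis=0)
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.exp(w)) @ v.T

# Two "reciprocal" covariances average to the identity in the log
# domain, unlike the arithmetic mean (which gives diag(1.25, 1.25)).
a = np.array([[2.0, 0.0], [0.0, 0.5]])
b = np.array([[0.5, 0.0], [0.0, 2.0]])
mean = log_euclidean_mean([a, b])
```

The same log-domain trick underlies tangent space mapping: covariances are mapped to the tangent space at a reference matrix before classification.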
Mehrdad Salmasi, Martin Stemmler, Stefan Glasauer, Alex Loebel
Synapses are the communication channels for information transfer between neurons; these are the points at which pulse-like signals are converted into the stochastic release of quantized amounts of chemical neurotransmitter. At many synapses, prior neuronal activity depletes synaptic resources, depressing subsequent responses of both spontaneous and spike-evoked releases. We analytically compute the information transmission rate of a synaptic release site, which we model as a binary asymmetric channel. Short-term depression is incorporated by assigning the channel a memory of depth one...
April 14, 2017: Neural Computation
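The binary asymmetric channel view of a release site can be made concrete: the input is spike/no-spike, the output is release/no-release, with different release probabilities in each case. The memoryless computation below (I = H(Y) - H(Y|X)) is a simplified sketch without the depth-one depression memory the abstract describes; the probability values are illustrative:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(pi, p_spont, p_evoked):
    """I(X;Y) for a binary asymmetric channel: input X=1 (spike) with
    probability pi; release occurs with probability p_spont absent a
    spike and p_evoked given one."""
    p_release = (1 - pi) * p_spont + pi * p_evoked
    return h2(p_release) - ((1 - pi) * h2(p_spont) + pi * h2(p_evoked))

# Rare spontaneous release, moderately reliable evoked release.
i = mutual_information(0.5, 0.01, 0.6)
```

When spontaneous and evoked release probabilities coincide, the output carries no information about the input, and the expression correctly gives zero.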
D J Strouse, David J Schwab
Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek (1999) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwing away as many bits as possible and selectively keeping those that are most important. In the IB, compression is measured by mutual information. Here, we introduce an alternative formulation that replaces mutual information with entropy, which we call the deterministic information bottleneck (DIB) and which, we argue, better captures this notion of compression...
April 14, 2017: Neural Computation
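The relationship between the two compression terms is easy to check numerically: for a deterministic (hard) encoder t(x), H(T|X) = 0, so the IB compression term I(X;T) = H(T) - H(T|X) collapses to the DIB term H(T). A small illustrative check, not the optimization procedure of either paper:

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# A hard clustering of four equiprobable inputs into two clusters.
p_x = [0.25, 0.25, 0.25, 0.25]
encoder = {0: 0, 1: 0, 2: 1, 3: 1}   # deterministic map x -> t

p_t = Counter()
for x, px in enumerate(p_x):
    p_t[encoder[x]] += px
h_t = entropy(p_t.values())

# I(X;T) = H(T) - H(T|X); for a deterministic encoder H(T|X) = 0.
i_xt = h_t - 0.0
```

For stochastic encoders the two terms diverge, which is where the DIB objective departs from the IB.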
Dengchao He, Hongjun Zhang, Wenning Hao, Rui Zhang, Kai Cheng
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled training corpus may contain many false-positive examples, which hurt the performance of relation extraction. Moreover, traditional feature-based distant supervised approaches build extraction models on human-designed features derived from natural language processing, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network...
April 14, 2017: Neural Computation
Ning Li, Jinde Cao, Ahmed Alsaedi, Fuad Alsaadi
This letter focuses on lag synchronization control analysis for memristor-based coupled neural networks with parameter mismatches. Due to the parameter mismatches, lag complete synchronization in general cannot be achieved. First, based on the [Formula: see text]-measure method, generalized Halanay inequality, together with control algorithms, some sufficient conditions are obtained to ensure that coupled memristor-based neural networks are in a state of lag synchronization with an error. Moreover, the error level is estimated...
April 14, 2017: Neural Computation
Adrian E Radillo, Alan Veliz-Cuba, Krešimir Josić, Zachary P Kilpatrick
In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible change point counts. This computation can be challenging, as the number of possibilities grows rapidly with time...
March 23, 2017: Neural Computation
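The core update the abstract describes, a posterior over all possible change point counts, can be sketched for a two-state environment. The array truncation, the known hazard rate, and the reset of the count axis here are simplifying assumptions; the paper's ideal observer also infers the rate of change itself:

```python
import numpy as np

def update(posterior, likelihood, hazard):
    """One step of a simplified ideal-observer update over the joint
    (current state s in {0,1}, change-point count c), stored as a
    (2, C) array. With probability `hazard` the environment switches
    state and the count increments by one."""
    pred = (1 - hazard) * posterior            # no change: stay put
    pred = pred.copy()
    pred[0, 1:] += hazard * posterior[1, :-1]  # change 1 -> 0, c += 1
    pred[1, 1:] += hazard * posterior[0, :-1]  # change 0 -> 1, c += 1
    post = pred * likelihood[:, None]          # weight by observation
    return post / post.sum()

# Start certain: state 0, zero change points observed so far.
posterior = np.zeros((2, 5))
posterior[0, 0] = 1.0
likelihood = np.array([0.8, 0.2])  # current observation favors state 0
posterior = update(posterior, likelihood, 0.1)
```

The count axis is what grows with time, which is exactly the computational burden the abstract points to.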
Terry Elliott
Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of synaptic plasticity. Integrate-and-express, filter-based models of synaptic plasticity propose that synapses act as low-pass filters, integrating plasticity induction signals before expressing synaptic plasticity...
March 23, 2017: Neural Computation
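The integrate-and-express idea can be sketched as a synapse that low-pass filters +1/-1 plasticity induction signals and only expresses plasticity when the filtered trace crosses a threshold. The leaky-accumulator form, the reset rule, and the parameter values below are illustrative assumptions, not the paper's specific filter model:

```python
def filter_synapse(induction_signals, decay=0.9, threshold=3.0):
    """Integrate +1 (potentiating) / -1 (depressing) induction signals
    through a leaky filter; express plasticity (+1 or -1) only when the
    trace magnitude reaches `threshold`, then reset the trace."""
    trace, expressed = 0.0, []
    for s in induction_signals:
        trace = decay * trace + s
        if abs(trace) >= threshold:
            expressed.append(+1 if trace > 0 else -1)
            trace = 0.0
        else:
            expressed.append(0)
    return expressed

# A sustained burst of potentiating inductions crosses threshold once;
# subsequent alternating inductions cancel in the filter.
events = filter_synapse([+1, +1, +1, +1, -1, +1, -1, +1])
```

The filtering is what lets the synapse ignore isolated, conflicting induction events while responding to coherent runs.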
Vitaly L Galinsky, Lawrence R Frank
A primary goal of many neuroimaging studies that use magnetic resonance imaging (MRI) is to deduce the structure-function relationships in the human brain using data from the three major neuro-MRI modalities: high-resolution anatomical, diffusion tensor imaging, and functional MRI. To date, the general procedure for analyzing these data is to combine the results derived independently from each of these modalities. In this article, we develop a new theoretical and computational approach for combining these different MRI modalities into a powerful and versatile framework that combines our recently developed methods for morphological shape analysis and segmentation, simultaneous local diffusion estimation and global tractography, and nonlinear and nongaussian spatial-temporal activation pattern classification and ranking, as well as our fast and accurate approach for nonlinear registration between modalities...
March 23, 2017: Neural Computation
Shuhei Fujiwara, Akiko Takeda, Takafumi Kanamori
Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended [Formula: see text]-SVM (E[Formula: see text]-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of E[Formula: see text]-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and E[Formula: see text]-SVM...
May 2017: Neural Computation
Marcelo Matheus Gauy, Florian Meier, Angelika Steger
The connection density of nearby neurons in the cortex has been observed to be around 0.1, whereas the longer-range connections are present with much sparser density (Kalisman, Silberberg, & Markram, 2005). We propose a memory association model that qualitatively explains these empirical observations. The model we consider is a multiassociative, sparse, Willshaw-like model consisting of binary threshold neurons and binary synapses. It uses recurrent synapses for iterative retrieval of stored memories. We quantify the usefulness of recurrent synapses by simulating the model for small network sizes and by doing a precise mathematical analysis for large network sizes...
May 2017: Neural Computation
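A minimal Willshaw-style memory with one recurrent retrieval step can be written in a few lines: storage clips co-activations to binary synapses, and retrieval keeps the k units receiving the most input from a partial cue. The winner-take-all threshold and the single retrieval step are simplifications; the paper's model iterates retrieval and is multiassociative:

```python
import numpy as np

def store(patterns, n):
    """Willshaw-like binary synapse matrix: W[i, j] = 1 if units i and
    j are co-active in any stored pattern (patterns are index sets)."""
    W = np.zeros((n, n), dtype=int)
    for p in patterns:
        idx = np.array(sorted(p))
        W[np.ix_(idx, idx)] = 1
    np.fill_diagonal(W, 0)
    return W

def retrieve(W, cue, k):
    """One retrieval step: keep the k units receiving the most input
    from the cue (a simplified winner-take-all threshold)."""
    drive = W[:, sorted(cue)].sum(axis=1)
    return set(np.argsort(drive)[-k:])

n = 100
p1 = {0, 1, 2, 3, 4}
p2 = {50, 51, 52, 53, 54}
W = store([p1, p2], n)
recalled = retrieve(W, {0, 1, 2}, k=5)  # partial cue from p1
```

With sparse patterns, a cue of three units already drives the two missing members of the stored clique above every unit outside it.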
Yuan Zhao, Il Memming Park
When governed by underlying low-dimensional dynamics, the interdependence of simultaneously recorded populations of neurons can be explained by a small number of shared factors, or a low-dimensional trajectory. Recovering these latent trajectories, particularly from single-trial population recordings, may help us understand the dynamics that drive neural computation. However, due to the biophysical constraints and noise in the spike trains, inferring trajectories from data is a challenging statistical problem in general...
May 2017: Neural Computation
Miho Itoh, Timothée Leleu
Recent experiments have shown that stereotypical spatiotemporal patterns occur during brief packets of spiking activity in the cortex, and it has been suggested that top-down inputs can modulate these patterns according to the context. We propose a simple model that may explain important features of these experimental observations and is analytically tractable. The key mechanism underlying this model is that context-dependent top-down inputs can modulate the effective connection strengths between neurons because of short-term synaptic depression...
May 2017: Neural Computation
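The key mechanism, top-down input reshaping effective connectivity through short-term depression, can be illustrated with the standard steady-state depression formula: a synapse driven at rate f has available resources x* = 1 / (1 + u f tau_rec). The Tsodyks-Markram-style parameterization and the values below are illustrative assumptions, not the paper's exact model:

```python
def effective_weights(w, top_down_rates, u=0.5, tau_rec=0.2):
    """Steady-state short-term depression: a synapse with baseline
    weight w, utilization u, and recovery time tau_rec (seconds),
    driven at rate f (Hz), has effective weight w * u * x* with
    x* = 1 / (1 + u * f * tau_rec)."""
    return [w * u / (1 + u * f * tau_rec) for f in top_down_rates]

# The same synapse under weak (1 Hz) vs strong (50 Hz) top-down drive:
# strong drive depletes resources and weakens the effective connection.
low, high = effective_weights(1.0, [1.0, 50.0])
```

Because different contexts drive different synapses at different rates, the same anatomical connectivity can express different effective networks, and hence different spatiotemporal patterns.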
Shashanka Ubaru, Yousef Saad, Abd-Krim Seghouane
Many machine learning and data-related applications require the knowledge of approximate ranks of large data matrices at hand. This letter presents two computationally inexpensive techniques to estimate the approximate ranks of such matrices. These techniques exploit approximate spectral densities, popular in physics, which are probability density distributions that measure the likelihood of finding eigenvalues of the matrix at a given point on the real line. Integrating the spectral density over an interval gives the eigenvalue count of the matrix in that interval...
May 2017: Neural Computation
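The quantity being estimated, the eigenvalue count of a matrix in an interval, is easy to state exactly; the letter's contribution is computing it cheaply from an approximate spectral density without a full eigendecomposition. The reference computation below uses a dense eigensolver and so does not reflect the letter's inexpensive techniques; the threshold choice is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def eigencount(A, a, b):
    """Number of eigenvalues of symmetric A in [a, b]: the integral of
    the spectral density over that interval. Computed here exactly as
    a reference, whereas the letter estimates it cheaply."""
    w = np.linalg.eigvalsh(A)
    return int(((w >= a) & (w <= b)).sum())

# Low-rank signal plus small noise: counting eigenvalues above the
# noise level recovers the approximate rank.
n, r = 60, 5
X = rng.standard_normal((n, r))
A = X @ X.T + 1e-3 * np.eye(n)
approx_rank = eigencount(A, 1.0, np.inf)
```

Choosing the lower integration limit, i.e., locating the gap between noise and signal eigenvalues in the spectral density, is the practically delicate step.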
Xiaowei Zhao, Feiping Nie, Sen Wang, Jun Guo, Pengfei Xu, Xiaojiang Chen
In recent years, unsupervised two-dimensional (2D) dimensionality reduction methods for unlabeled large-scale data have made progress. However, their performance degrades when the similarity matrix is learned only at the beginning of the dimensionality reduction process. A similarity matrix is used to reveal the underlying geometric structure of data in unsupervised dimensionality reduction methods. Because of noisy data, it is difficult to learn the optimal similarity matrix. In this letter, we propose a new dimensionality reduction model for 2D image matrices: unsupervised 2D dimensionality reduction with adaptive structure learning (DRASL)...
May 2017: Neural Computation