
IEEE Transactions on Neural Networks and Learning Systems

Lei Zhang, David Zhang
Conventional extreme learning machines (ELMs) compute the Moore-Penrose generalized inverse of the hidden-layer activation matrix and analytically determine the output weights to achieve good generalization performance, under the assumption that different types of misclassification incur the same loss. This assumption may not hold in cost-sensitive recognition tasks, such as a face-recognition-based access control system, where misclassifying a stranger as a family member may have far more serious consequences than misclassifying a family member as a stranger...
October 11, 2016: IEEE Transactions on Neural Networks and Learning Systems
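A minimal NumPy sketch of the conventional (cost-insensitive) ELM baseline described above; layer sizes, the sigmoid activation, and all names are illustrative, not taken from the paper's cost-sensitive variant.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    """Basic ELM: random hidden layer, output weights via pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # hidden activation matrix
    beta = np.linalg.pinv(H) @ T                   # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```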
Xiaowei Feng, Xiangyu Kong, Hongguang Ma, Xiaosheng Si
The generalized eigendecomposition problem has been widely employed in many signal processing applications. In this paper, we propose a unified and self-stabilizing algorithm that adaptively extracts the first principal and minor generalized eigenvectors of the matrix pencil formed from two vector sequences. Furthermore, we extend the proposed algorithm to extract multiple generalized eigenvectors. The performance analysis shows that only the desired equilibrium point of the proposed algorithm is stable and all others are (unstable) repellers or saddle points...
October 10, 2016: IEEE Transactions on Neural Networks and Learning Systems
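For reference, a batch (non-adaptive) computation of the quantities the adaptive algorithm tracks, using SciPy's generalized symmetric eigensolver on sample covariances of two synthetic sequences; the data and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
y = rng.normal(size=(1000, 5))                 # first vector sequence
x = rng.normal(size=(1000, 5))                 # second vector sequence
Ry = y.T @ y / len(y)                          # sample covariance of y
Rx = x.T @ x / len(x) + 1e-3 * np.eye(5)       # covariance of x, kept PD

w, V = eigh(Ry, Rx)                            # solves Ry v = lambda Rx v
v_minor, v_principal = V[:, 0], V[:, -1]       # eigenvalues are ascending
```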
Gnaneswaran Nagamani, Thirunavukkarasu Radhika, Quanxin Zhu
In this paper, we investigate the dissipativity and passivity of Markovian jump stochastic neural networks involving two additive time-varying delays. Using a Lyapunov-Krasovskii functional with triple and quadruple integral terms, we obtain delay-dependent passivity and dissipativity criteria for the system. Using a generalized Finsler lemma (GFL), a set of slack variables with special structure is introduced to reduce design conservatism. The dissipativity and passivity criteria, which depend on the upper bounds of the discrete time-varying delay and its derivative, are given in terms of linear matrix inequalities (LMIs) that can be efficiently solved with standard numerical software...
October 10, 2016: IEEE Transactions on Neural Networks and Learning Systems
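As a hedged illustration of how such LMI criteria are checked numerically, the sketch below verifies a much simpler classical delay-independent stability LMI for x'(t) = A x(t) + Ad x(t - tau) with CVXPY; it is not the paper's criterion, and the system matrices are made up.

```python
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 0.5], [0.3, -1.5]])      # nominal dynamics (made up)
Ad = np.array([[0.1, 0.0], [0.2, 0.1]])       # delayed-state dynamics
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)       # Lyapunov matrix
Q = cp.Variable((n, n), symmetric=True)       # Krasovskii-term weight
M = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
             [Ad.T @ P, -Q]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   Q >> eps * np.eye(n),
                   M << -eps * np.eye(2 * n)])
prob.solve()
print("stability LMI feasible:", prob.status == cp.OPTIMAL)
```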
Nikola K Kasabov, Maryam Gholami Doborjeh, Zohreh Gholami Doborjeh
This paper introduces a new methodology for dynamic learning, visualization, and classification of functional magnetic resonance imaging (fMRI) as spatiotemporal brain data. The method is based on an evolving spatiotemporal data machine of evolving spiking neural networks (SNNs) exemplified by the NeuCube architecture [1]. The method consists of several steps: mapping spatial coordinates of fMRI data into a 3-D SNN cube (SNNc) that represents a brain template; input data transformation into trains of spikes; deep, unsupervised learning in the 3-D SNNc of spatiotemporal patterns from data; supervised learning in an evolving SNN classifier; parameter optimization; and 3-D visualization and model interpretation...
October 6, 2016: IEEE Transactions on Neural Networks and Learning Systems
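One step of such a pipeline, sketched under assumptions: a threshold-based (delta) encoder that turns a continuous voxel time series into positive/negative spike trains. The threshold and signal are illustrative; NeuCube's actual encoding options may differ.

```python
import numpy as np

def delta_spike_encode(signal, threshold=0.5):
    """Emit a positive (negative) spike when the signal rises (falls)
    by more than `threshold` between consecutive samples."""
    diff = np.diff(signal)
    pos = (diff > threshold).astype(int)
    neg = (diff < -threshold).astype(int)
    return pos, neg

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t) + 0.1 * rng.normal(size=t.size)   # stand-in voxel signal
pos, neg = delta_spike_encode(series, threshold=0.1)
```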
Xi Peng, Canyi Lu, Zhang Yi, Huajin Tang
Many works have shown that Frobenius-norm-based representation (FNR) is competitive with sparse representation and nuclear-norm-based representation (NNR) in numerous tasks such as subspace clustering. Despite the success of FNR in experimental studies, little theoretical analysis has been provided to explain its working mechanism. In this brief, we fill this gap by establishing theoretical connections between FNR and NNR. More specifically, we prove that: 1) when the dictionary provides enough representative capacity, FNR is exactly NNR even when the data set contains Gaussian noise, Laplacian noise, or sample-specified corruption and 2) otherwise, FNR and NNR are two solutions on the column space of the dictionary...
October 6, 2016: IEEE Transactions on Neural Networks and Learning Systems
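A common instantiation of FNR, shown as a hedged sketch: the ridge-style self-representation min_Z ||X - X Z||_F^2 + lambda ||Z||_F^2 admits the closed form below (data and lambda are illustrative).

```python
import numpy as np

def fnr_representation(X, lam=0.1):
    """Closed-form minimizer of ||X - X Z||_F^2 + lam * ||Z||_F^2."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))        # columns are data points
Z = fnr_representation(X)            # 50 x 50 self-representation matrix
```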
Liangli Zhen, Dezhong Peng, Zhang Yi, Yong Xiang, Peng Chen
In an underdetermined mixture system with n unknown sources, it is a challenging task to separate these sources from their m observed mixture signals, where m < n. By exploiting the technique of sparse coding, we propose an effective approach to discover some 1-D subspaces from the set consisting of all the time-frequency (TF) representation vectors of the observed mixture signals. We show that these 1-D subspaces are associated with TF points where only a single source possesses dominant energy. By grouping the vectors in these subspaces via a hierarchical clustering algorithm, we obtain an estimate of the mixing matrix...
October 5, 2016: IEEE Transactions on Neural Networks and Learning Systems
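A hedged sketch of the mixing-matrix estimation step: normalized TF-domain mixture vectors that each come from a single dominant source are grouped by SciPy's hierarchical clustering, and cluster means estimate the mixing columns. The single-source-point detection itself is omitted, and the data are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
A_true = rng.normal(size=(2, 3))                  # m=2 mixtures, n=3 sources
idx = rng.integers(0, 3, size=300)                # dominant source per TF point
V = A_true[:, idx]                                # single-source TF vectors
V = V * np.sign(V[0]) / np.linalg.norm(V, axis=0) # fix sign, normalize

labels = fcluster(linkage(V.T, method="average"), t=3, criterion="maxclust")
A_est = np.stack([V[:, labels == k].mean(axis=1) for k in (1, 2, 3)], axis=1)
# columns of A_est estimate columns of A_true up to order, sign, and scale
```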
Shibing Zhou, Zhenyuan Xu, Fei Liu
Determining the optimal number of clusters is crucial to clustering quality in cluster analysis. From the standpoint of sample geometry, two concepts, i.e., the sample clustering dispersion degree and the sample clustering synthesis degree, are defined, and a new clustering validity index is designed. Moreover, a method for determining the optimal number of clusters based on an agglomerative hierarchical clustering (AHC) algorithm is proposed. The new index and the method can evaluate the clustering results produced by the AHC and determine the optimal number of clusters for multiple types of datasets, such as linear, manifold, annular, and convex structures...
October 5, 2016: IEEE Transactions on Neural Networks and Learning Systems
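The paper's dispersion/synthesis index is not reproduced here; as a stand-in, the sketch below scores AHC partitions over candidate cluster counts with the silhouette index and keeps the maximizer, which illustrates the selection procedure only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in (0, 3, 6)])
Z = linkage(X, method="ward")                 # AHC dendrogram

scores = {k: silhouette_score(X, fcluster(Z, t=k, criterion="maxclust"))
          for k in range(2, 8)}               # candidate cluster counts
best_k = max(scores, key=scores.get)          # expected: 3 on this toy data
```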
Rahul Kumar Sevakula, Nishchal Kumar Verma
Classification algorithms have traditionally been designed to simultaneously reduce errors caused by bias as well as by variance. However, there are many situations in which low generalization error becomes extremely crucial to obtaining tangible classification solutions, and even slight overfitting causes serious consequences in the test results. In such situations, classifiers with a low Vapnik-Chervonenkis (VC) dimension can make a positive difference due to two main advantages: 1) the classifier manages to keep the test error close to the training error and 2) the classifier learns effectively with a small number of samples...
September 30, 2016: IEEE Transactions on Neural Networks and Learning Systems
Wenrui Hu, Dacheng Tao, Wensheng Zhang, Yuan Xie, Yehui Yang
In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, the twist tensor nuclear norm (t-TNN). The twist tensor is a three-way tensor representation that laterally stores 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using the fast Fourier transform. On the other hand, t-TNN equals the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way...
September 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
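The stated equivalence can be checked directly in NumPy: the sum of nuclear norms of the Fourier-domain frontal slices equals the nuclear norm of the block circulant matricization (toy tensor; the twist reordering of data slices is omitted).

```python
import numpy as np

def tnn_fourier(X):
    """Sum of nuclear norms of the FFT-domain frontal slices."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xf[:, :, k], ord="nuc")
               for k in range(X.shape[2]))

def bcirc_nuclear(X):
    """Nuclear norm of the block circulant matricization."""
    n1, n2, n3 = X.shape
    B = np.block([[X[:, :, (i - j) % n3] for j in range(n3)]
                  for i in range(n3)])
    return np.linalg.norm(B, ord="nuc")

X = np.random.default_rng(0).normal(size=(4, 5, 3))
assert np.isclose(tnn_fourier(X), bcirc_nuclear(X))
```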
Yin Sheng, Yi Shen, Mingfu Zhu
This paper deals with the global exponential stability of delayed recurrent neural networks (DRNNs). By constructing an augmented Lyapunov-Krasovskii functional and adopting the reciprocally convex combination approach and the Wirtinger-based integral inequality, delay-dependent global exponential stability criteria are derived in terms of linear matrix inequalities. Meanwhile, a general and effective method for global exponential stability analysis of DRNNs is given through a lemma, where the exponential convergence rate can be estimated...
September 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
Xiaobing Pei, Chuanbo Chen, Yue Guan
In this paper, we propose a novel graph-based semisupervised learning framework, called joint sparse representation and embedding propagation learning (JSREPL). The idea of JSREPL is to join EPL with sparse representation to perform label propagation. Like most graph-based semisupervised propagation learning algorithms, JSREPL constructs a weight graph matrix from the given data. Different from classical approaches, which build the weight graph matrix and estimate the labels of unlabeled data in sequence, JSREPL builds the weight graph matrix and estimates the labels of unlabeled data simultaneously...
September 28, 2016: IEEE Transactions on Neural Networks and Learning Systems
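For context, a sketch of the classical propagation step that JSREPL couples with sparse-representation-based graph construction; here the graph is a plain Gaussian-kernel weight matrix, purely illustrative.

```python
import numpy as np

def propagate(W, Y0, alpha=0.9, iters=100):
    """Diffuse label scores over the graph, softly clamping the seeds."""
    S = W / W.sum(axis=1, keepdims=True)       # row-normalized weights
    F = Y0.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y0
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
W = np.exp(-np.linalg.norm(X[:, None] - X[None], axis=2) ** 2)
Y0 = np.zeros((60, 2)); Y0[0, 0] = 1; Y0[59, 1] = 1   # two labeled seeds
labels = propagate(W, Y0)                      # expected: 30/30 split
```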
Lorenzo Livi, Cesare Alippi
One-class classifiers offer valuable tools to assess the presence of outliers in data. In this paper, we propose a design methodology for one-class classifiers based on entropic spanning graphs. Our approach also accounts for the possibility of processing nonnumeric data by means of an embedding procedure. The spanning graph is learned on the embedded input data, and the resulting partition of vertices defines the classifier. The final partition is derived by exploiting a criterion based on mutual information minimization...
September 28, 2016: IEEE Transactions on Neural Networks and Learning Systems
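A simplified relative of the idea, under assumptions (no entropic criterion, no embedding): build a minimum spanning tree on the data, cut unusually long edges, and flag small components as outliers.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (80, 2)), [[3.0, 3.0]]])   # one outlier

T = minimum_spanning_tree(squareform(pdist(X))).toarray()
T[T > 3 * np.median(T[T > 0])] = 0             # cut unusually long edges
_, comp = connected_components((T + T.T) > 0, directed=False)
sizes = np.bincount(comp)
outliers = np.where(sizes[comp] < 5)[0]        # points in tiny components
```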
Alain Rakotomamonjy, Sokol Koco, Liva Ralaivola
Several sparsity-constrained algorithms, such as orthogonal matching pursuit (OMP) or the Frank-Wolfe (FW) algorithm, work by iteratively selecting a novel atom to add to the current nonzero set of variables. This selection step is usually performed by computing the gradient and then looking for the gradient component with the maximal absolute entry. This step can be computationally expensive, especially for large-scale and high-dimensional data. In this paper, we aim at accelerating these sparsity-constrained optimization algorithms by exploiting the key observation that, for these algorithms to work, one only needs the coordinate of the gradient's top entry...
September 9, 2016: IEEE Transactions on Neural Networks and Learning Systems
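The step being accelerated, in a minimal OMP sketch: each iteration scans the full gradient D.T @ r only to take the index of its largest-magnitude entry (sizes and the 2-sparse signal are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(100, 5000))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
x = D[:, [7, 42]] @ np.array([1.0, -2.0])        # 2-sparse ground truth

support, r = [], x.copy()
for _ in range(2):
    j = int(np.argmax(np.abs(D.T @ r)))          # costly selection step
    support.append(j)
    coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
    r = x - D[:, support] @ coef                 # update the residual
# support should recover {7, 42}
```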
Alberto Antonietti, Claudia Casellato, Egidio D'Angelo, Alessandra Pedrocchi
The cerebellum plays a critical role in sensorimotor control. However, how the specific circuits and plastic mechanisms of the cerebellum are engaged in closed-loop processing is still unclear. We developed an artificial sensorimotor control system embedding a detailed spiking cerebellar microcircuit with three bidirectional plasticity sites. This proved able to reproduce a cerebellar-driven associative paradigm, the eyeblink classical conditioning (EBCC), in which a precise time relationship between an unconditioned stimulus (US) and a conditioned stimulus (CS) is established...
September 1, 2016: IEEE Transactions on Neural Networks and Learning Systems
Lu Dong, Xiangnan Zhong, Changyin Sun, Haibo He
In this paper, an event-triggered near-optimal control structure is developed for nonlinear continuous-time systems with control constraints. Due to the saturating actuators, a nonquadratic cost function is introduced and the Hamilton-Jacobi-Bellman (HJB) equation for constrained nonlinear continuous-time systems is formulated. In order to solve the HJB equation, an actor-critic framework is presented. The critic network is used to approximate the cost function and the action network is used to estimate the optimal control law...
August 31, 2016: IEEE Transactions on Neural Networks and Learning Systems
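Assuming the nonquadratic cost takes the common Abu-Khalaf-Lewis form for saturating actuators |u| <= lambda (the paper may use a different generator), its closed form can be sketched as:

```python
import numpy as np

def nonquadratic_cost(u, lam=1.0):
    """W(u) = 2 * integral_0^u lam * atanh(v / lam) dv, for |u| < lam."""
    return (2 * lam * u * np.arctanh(u / lam)
            + lam**2 * np.log(1 - (u / lam) ** 2))

u = np.linspace(-0.99, 0.99, 5)
print(nonquadratic_cost(u))   # grows steeply as |u| nears the bound
```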
Po-Lung Tien
In this paper, we propose a novel discrete-time recurrent neural network aimed at solving a new class of multi-constrained K-winner-take-all (K-WTA) problems. By facilitating specially designed asymmetric neuron weights, the proposed model is capable of operating in a fully parallel manner, thereby allowing true digital implementation. This paper also provides theorems that delineate the theoretical upper bound of the convergence latency, which is merely O(K). Importantly, simulations show that the average convergence time is close to O(1) in most general cases...
August 26, 2016: IEEE Transactions on Neural Networks and Learning Systems
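A far simpler discrete-time K-WTA baseline than the proposed network, for intuition only: bisect a shared activation threshold until exactly K neurons remain above it.

```python
import numpy as np

def kwta(x, K, iters=60):
    """Bisect a shared threshold so exactly K entries stay above it."""
    lo, hi = x.min(), x.max()
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.sum(x > theta) > K:
            lo = theta                 # threshold still too low
        else:
            hi = theta                 # at most K winners: come down
    return (x > hi).astype(int)

x = np.random.default_rng(0).normal(size=10)
assert kwta(x, K=3).sum() == 3
```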
Piotr Antonik, Francois Duport, Michiel Hermans, Anteo Smerieri, Marc Haelterman, Serge Massar
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. The performance of its analog implementations is comparable to that of other state-of-the-art algorithms for tasks such as speech recognition or chaotic time series prediction, but these implementations are often constrained by the offline training methods commonly employed. Here, we investigated the online learning approach by training an optoelectronic reservoir computer using a simple gradient descent algorithm, programmed on a field-programmable gate array chip...
August 26, 2016: IEEE Transactions on Neural Networks and Learning Systems
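A minimal software analogue of the setup, under assumptions: an echo state reservoir whose readout weights are trained online by plain stochastic gradient descent (LMS), one sample at a time; all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lr = 100, 1e-2
Win = rng.uniform(-0.5, 0.5, size=N)                 # input weights
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # set spectral radius

w_out, x = np.zeros(N), np.zeros(N)
u = np.sin(np.arange(2000) * 0.2)                    # task: predict u[t+1]
for t in range(len(u) - 1):
    x = np.tanh(W @ x + Win * u[t])                  # reservoir update
    err = w_out @ x - u[t + 1]
    w_out -= lr * err * x                            # online LMS step
```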
Roberto Fierimonte, Simone Scardapane, Aurelio Uncini, Massimo Panella
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns...
August 26, 2016: IEEE Transactions on Neural Networks and Learning Systems
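For reference, the centralized manifold-regularization baseline that the distributed algorithm decentralizes, sketched for a linear model (Laplacian-regularized least squares; regularization weights and data are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 0.5, (30, 2)), rng.normal(1.5, 0.5, (30, 2))])
Xl, yl = X[[0, 30]], np.array([-1.0, 1.0])        # two labeled points

W = np.exp(-np.linalg.norm(X[:, None] - X[None], axis=2) ** 2)
L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
gA, gI = 1e-3, 1e-2                                # ambient/intrinsic weights
w = np.linalg.solve(Xl.T @ Xl + gA * np.eye(2) + gI * X.T @ L @ X, Xl.T @ yl)
pred = np.sign(X @ w)                              # labels for all 60 points
```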
Wojciech Samek, Alexander Binder, Gregoire Montavon, Sebastian Lapuschkin, Klaus-Robert Muller
Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image...
August 25, 2016: IEEE Transactions on Neural Networks and Learning Systems
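One family of such approaches is layer-wise relevance propagation; a hedged epsilon-rule sketch for a toy two-layer ReLU network follows (whether this matches the exact methods the paper evaluates is left open).

```python
import numpy as np

def lrp_linear(a, Wm, R_out, eps=1e-6):
    """Epsilon rule: split each output's relevance R_out over its inputs
    in proportion to the contributions z_ij = a_i * w_ij."""
    z = a[:, None] * Wm
    zj = z.sum(axis=0)
    zj = zj + eps * np.where(zj >= 0, 1.0, -1.0)   # stabilizer
    return (z / zj) @ R_out

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
x = rng.normal(size=4)
h = np.maximum(W1.T @ x, 0)                        # hidden ReLU layer
out = W2.T @ h
R2 = np.zeros(3); R2[out.argmax()] = out.max()     # start at the prediction
R1 = lrp_linear(h, W2, R2)                         # through the top layer
R0 = lrp_linear(x, W1, R1)                         # per-input relevances
```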
Alex Alexandridis, Eva Chondrodima, Nikolaos Giannopoulos, Haralambos Sarimveis
This brief presents a novel learning scheme for categorical data based on radial basis function (RBF) networks. The proposed approach replaces the numerical vectors known as RBF centers with categorical tuple centers, and employs specially designed measures for calculating the distance between the center and the input tuples. Furthermore, a fast noniterative categorical clustering algorithm is proposed to accomplish the first stage of RBF training involving categorical center selection, whereas the weights are calculated through linear regression...
August 24, 2016: IEEE Transactions on Neural Networks and Learning Systems
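A hedged sketch of the two-stage idea: RBF activations computed from a Hamming-style distance to categorical tuple centers, then output weights by linear least squares. Centers here are picked naively rather than with the paper's noniterative clustering.

```python
import numpy as np

def hamming(a, b):
    """Count of mismatched categorical attributes (broadcasts)."""
    return np.sum(a != b, axis=-1)

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 6))               # categorical tuples
y = (X[:, 0] == X[:, 1]).astype(float)              # toy target
centers = X[rng.choice(len(X), 10, replace=False)]  # naive center choice

Phi = np.exp(-hamming(X[:, None, :], centers[None, :, :]) / 2.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # second-stage regression
y_hat = Phi @ w
```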