IEEE Transactions on Neural Networks and Learning Systems

Min Han, Meiling Xu
The echo state network is a novel kind of recurrent neural network with a trainable linear readout layer and a large, fixed, recurrently connected hidden layer, which can map the rich dynamics of complex real-world data sets. It has been studied extensively for time series prediction. However, an ill-posed problem may arise when the number of real-world training samples is smaller than the size of the hidden layer. In this brief, a Laplacian echo state network (LAESN) is proposed to overcome the ill-posed problem and obtain low-dimensional output weights...
January 2018: IEEE Transactions on Neural Networks and Learning Systems
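For context, the standard echo state network that the LAESN extends can be sketched in a few lines of NumPy. The reservoir size, spectral radius, ridge penalty, and the toy sine-wave task below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 50, 1

# Fixed random reservoir, rescaled to spectral radius 0.9 (a common
# sufficient condition for the echo state property).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x)
    return np.array(states)

# One-step-ahead prediction of a toy sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])   # state after seeing u_t ...
y = u[1:, 0]                # ... predicts u_{t+1}

# Ridge-regularized (Tikhonov) readout: the regularization term is what
# counters ill-posedness when samples are scarce relative to n_res.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Only `W_out` is trained; `W` and `W_in` stay fixed, which is what makes the readout a linear least-squares problem in the first place.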
Zhengcai Cao, Qing Xiao, Ran Huang, Mengchu Zhou
In this paper, the problem of path following for underactuated snake robots is investigated using approximate dynamic programming and neural networks (NNs). The lateral undulatory gait of a snake robot is stabilized in a virtual holonomic constraint manifold through a partial feedback linearizing control law. Based on a dynamic compensator and a line-of-sight guidance law, the path-following problem is transformed into a regulation problem for a nonlinear system with uncertainties. It is then solved by an infinite-horizon optimal control scheme using a single critic NN...
January 2018: IEEE Transactions on Neural Networks and Learning Systems
Yu-Jun Zheng, Wei-Guo Sheng, Xing-Ming Sun, Sheng-Yong Chen
Passenger profiling plays a vital part in commercial aviation security, but classical methods become very inefficient at handling the rapidly increasing volume of electronic records. This paper proposes a deep learning approach to passenger profiling. At the center of our approach is a Pythagorean fuzzy deep Boltzmann machine (PFDBM), whose parameters are expressed as Pythagorean fuzzy numbers, so that each neuron can learn how a feature affects the production of the correct output from both the positive and the negative side...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
Xiaobing Pei, Chuanbo Chen, Yue Guan
In this paper, we propose a novel graph-based semisupervised learning framework called joint sparse representation and embedding propagation learning (JSREPL). The idea of JSREPL is to combine embedding propagation learning (EPL) with sparse representation to perform label propagation. Like most graph-based semisupervised propagation learning algorithms, JSREPL constructs a weight graph matrix from the given data. Unlike classical approaches, which build the weight graph matrix and then estimate the labels of unlabeled data in sequence, JSREPL builds the weight graph matrix and estimates the labels simultaneously...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
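JSREPL builds on graph-based label propagation. A minimal NumPy sketch of the classical propagation step (not JSREPL itself; the Gaussian affinity graph, mixing coefficient, and two-blob data are illustrative assumptions) looks like this:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two Gaussian clusters; only one labeled point per class.
X = np.vstack([rng.standard_normal((20, 2)),
               rng.standard_normal((20, 2)) + 6.0])
labels = -np.ones(40, dtype=int)   # -1 marks unlabeled points
labels[0], labels[20] = 0, 1

# Gaussian-kernel affinity graph with symmetric normalization.
d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)
Dinv = 1.0 / np.sqrt(W.sum(axis=1))
S = Dinv[:, None] * W * Dinv[None]

# Iterative propagation: F <- alpha * S @ F + (1 - alpha) * Y.
Y = np.zeros((40, 2))
Y[labels >= 0] = np.eye(2)[labels[labels >= 0]]
F = Y.copy()
for _ in range(100):
    F = 0.9 * S @ F + 0.1 * Y
pred = F.argmax(axis=1)
```

The two labeled seeds spread their labels along the graph; JSREPL's contribution is learning the graph weights jointly with this propagation rather than fixing them up front.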
Lorenzo Livi, Cesare Alippi
One-class classifiers offer valuable tools for assessing the presence of outliers in data. In this paper, we propose a design methodology for one-class classifiers based on entropic spanning graphs. Our approach also accounts for nonnumeric data by means of an embedding procedure. The spanning graph is learned on the embedded input data, and the resulting partition of vertices defines the classifier. The final partition is derived using a criterion based on mutual information minimization...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
Liangli Zhen, Dezhong Peng, Zhang Yi, Yong Xiang, Peng Chen
In an underdetermined mixture system with unknown sources, where the sources outnumber the observed mixture signals, separating the sources from their observed mixtures is a challenging task. By exploiting the technique of sparse coding, we propose an effective approach to discover 1-D subspaces within the set of all time-frequency (TF) representation vectors of the observed mixture signals. We show that these 1-D subspaces are associated with TF points at which only a single source possesses dominant energy. By grouping the vectors in these subspaces via a hierarchical clustering algorithm, we obtain an estimate of the mixing matrix...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
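The clustering step of this idea can be sketched in NumPy. To keep the sketch short, the toy sources below are active one at a time, so every sample is single-source dominant by construction; the paper instead detects such TF points from the mixtures via sparse coding, which we skip:

```python
import numpy as np

rng = np.random.default_rng(1)

# True mixing matrix: 2 mixtures, 3 sources (underdetermined), unit columns.
A = rng.standard_normal((2, 3))
A /= np.linalg.norm(A, axis=0)

# Toy sparse sources: exactly one source active per sample.
T = 60
S = np.zeros((3, T))
for t in range(T):
    S[rng.integers(3), t] = rng.standard_normal()

X = A @ S                                  # observed mixtures
keep = np.linalg.norm(X, axis=0) > 0.3     # drop near-silent samples
V = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
V *= np.sign(V[0] + 1e-12)                 # fold the sign ambiguity

# Plain agglomerative (centroid-linkage) clustering down to 3 clusters.
clusters = [[v] for v in V.T]
while len(clusters) > 3:
    cent = [np.mean(c, axis=0) for c in clusters]
    best, pair = np.inf, None
    for i in range(len(cent)):
        for j in range(i + 1, len(cent)):
            d = np.linalg.norm(cent[i] - cent[j])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    clusters[i] += clusters.pop(j)

# Cluster centroids estimate the mixing-matrix columns (up to sign/order).
A_hat = np.array([np.mean(c, axis=0) for c in clusters]).T
A_hat /= np.linalg.norm(A_hat, axis=0)
```

At single-source-dominant points each normalized mixture vector is (up to sign) a column of `A`, which is why the cluster centroids recover the mixing matrix.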
Rahul Kumar Sevakula, Nishchal Kumar Verma
Classification algorithms have traditionally been designed to simultaneously reduce errors caused by bias as well as by variance. However, in many situations a low generalization error becomes crucial to obtaining tangible classification solutions, and even slight overfitting causes serious consequences in the test results. In such situations, classifiers with a low Vapnik-Chervonenkis (VC) dimension can make a positive difference, owing to two main advantages: 1) the classifier manages to keep the test error close to the training error and 2) the classifier learns effectively from a small number of samples...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
Shibing Zhou, Zhenyuan Xu, Fei Liu
Determining the optimal number of clusters is crucial to clustering quality in cluster analysis. From the standpoint of sample geometry, two concepts, the sample clustering dispersion degree and the sample clustering synthesis degree, are defined, and a new clustering validity index is designed. Moreover, a method for determining the optimal number of clusters based on an agglomerative hierarchical clustering (AHC) algorithm is proposed. The new index and method can evaluate the clustering results produced by the AHC algorithm and determine the optimal number of clusters for multiple types of data sets, such as linear, manifold, annular, and convex structures...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
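The overall recipe — run AHC, score the partition at each candidate number of clusters with a validity index, and pick the best score — can be sketched as follows. A Calinski-Harabasz-style dispersion ratio stands in for the paper's dispersion/synthesis index, and the three-blob data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three well-separated 2-D blobs.
centers = np.array([[0, 0], [6, 0], [3, 6]])
data = np.vstack([c + rng.standard_normal((20, 2)) for c in centers])

# Agglomerative (centroid-linkage) merging; record the partition at each k.
clusters = [[i] for i in range(len(data))]
partitions = {}
while len(clusters) > 1:
    cent = [data[c].mean(axis=0) for c in clusters]
    best, pair = np.inf, None
    for i in range(len(cent)):
        for j in range(i + 1, len(cent)):
            d = np.linalg.norm(cent[i] - cent[j])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    clusters[i] += clusters.pop(j)
    if 2 <= len(clusters) <= 6:
        partitions[len(clusters)] = [list(c) for c in clusters]

def validity(parts):
    """Ratio of between- to within-cluster dispersion (a stand-in for the
    paper's dispersion/synthesis index)."""
    mean = data.mean(axis=0)
    n, k = len(data), len(parts)
    between = sum(len(c) * np.sum((data[c].mean(axis=0) - mean) ** 2)
                  for c in parts)
    within = sum(np.sum((data[c] - data[c].mean(axis=0)) ** 2)
                 for c in parts)
    return (between / (k - 1)) / (within / (n - k))

best_k = max(partitions, key=lambda k: validity(partitions[k]))
```

For well-separated blobs the index peaks at the true number of clusters, here 3.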
Reddi Kamesh, Kalipatnapu Yamuna Rani
In this paper, a novel formulation for nonlinear model predictive control (MPC) is proposed that incorporates the extended Kalman filter (EKF) control concept, using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules: online parameter estimation based on past measurements, and control estimation over the control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
Ali Heydari
Adaptive optimal control using value iteration initiated from a stabilizing control policy is theoretically analyzed. The analysis concerns the stability of the system during the learning stage, covering the system controlled both by any fixed control policy and by an evolving policy. A feature of the presented results is the identification of subsets of the region of attraction chosen so that if the initial condition belongs to such a subset, the entire state trajectory remains within the training region. The function approximation results therefore remain reliable, as no extrapolation is conducted...
October 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
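For readers unfamiliar with the underlying recursion, plain value iteration is shown below on a toy discrete MDP (a 5-state chain with a reward at the goal) rather than the paper's continuous-state, function-approximation setting; the chain, reward, and discount factor are illustrative:

```python
import numpy as np

# Value iteration on a deterministic 1-D chain MDP:
# states 0..4, actions left/right, reward 1 for reaching goal state 4.
n, gamma = 5, 0.9
V = np.zeros(n)
for _ in range(100):
    V_new = np.zeros(n)
    for s in range(n):
        if s == n - 1:
            V_new[s] = 0.0          # absorbing goal state
            continue
        candidates = []
        for s_next in (max(s - 1, 0), min(s + 1, n - 1)):
            r = 1.0 if s_next == n - 1 else 0.0
            candidates.append(r + gamma * V[s_next])
        V_new[s] = max(candidates)  # Bellman optimality backup
    V = V_new
```

Here the optimal value decays geometrically with distance from the goal (e.g., `V[0] = 0.9**3`). The paper's concern is what happens between such backups when the policy is deployed on a real system mid-learning.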
Mojtaba Nayyeri, Hadi Sadoghi Yazdi, Alaleh Maskooki, Modjtaba Rouhani
Several objective functions have been proposed in the literature for adjusting the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of networks based on these objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one on a compact input sample space...
October 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
Johan Bjurgert, Patricio E Valenzuela, Cristian R Rojas
In the field of machine learning, the Adaptive Boosting algorithm has been successfully applied to a wide range of regression and classification problems. However, to the best of the authors' knowledge, its use for estimating dynamical systems has not been explored. In this brief, we examine the connection between Adaptive Boosting and system identification, and give examples of an identification method that makes use of this connection. We prove that, under reasonable assumptions, the resulting estimate converges to the true underlying system for an output-error model structure in the large-sample limit, and we derive a bound on the model mismatch for the noise-free case...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
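As background, generic Adaptive Boosting for regression (an AdaBoost.R2-style loop with decision stumps) looks like the sketch below; this is not the authors' identification scheme, and for simplicity it combines the weak learners with a weighted mean where the classical algorithm uses a weighted median:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data: noisy 1-D sine target.
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)

def fit_stump(x, y, w):
    """Weighted least-squares decision stump on 1-D input."""
    best = (np.inf, None)
    for thr in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        m = x <= thr
        if m.all() or (~m).all():
            continue
        lv = np.average(y[m], weights=w[m])
        rv = np.average(y[~m], weights=w[~m])
        err = np.sum(w * (y - np.where(m, lv, rv)) ** 2)
        if err < best[0]:
            best = (err, (thr, lv, rv))
    return best[1]

w = np.ones(len(x)) / len(x)          # sample weights
stumps, alphas = [], []
for _ in range(50):
    thr, lv, rv = fit_stump(x, y, w)
    ae = np.abs(y - np.where(x <= thr, lv, rv))
    L = (ae / ae.max()) ** 2          # square-loss variant of AdaBoost.R2
    Lbar = np.sum(w * L)
    if Lbar >= 0.5:                   # weak learner no longer useful
        break
    beta = Lbar / (1 - Lbar)
    w *= beta ** (1 - L)              # upweight poorly fit samples
    w /= w.sum()
    stumps.append((thr, lv, rv))
    alphas.append(np.log(1 / beta))

def predict(xq):
    preds = np.array([np.where(xq <= t, l, r) for t, l, r in stumps])
    return np.average(preds, axis=0, weights=alphas)

mse = float(np.mean((predict(x) - y) ** 2))
```

Each round refits a stump on reweighted data, so later learners concentrate on the samples the ensemble still gets wrong.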
Arash Gharehbaghi, Maria Linden
This paper presents a novel method for learning the cyclic contents of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods at different levels of learning for enhanced performance. It is employed within a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic contents of the time series are preserved in an efficient manner. This paper also suggests a systematic procedure for finding the design parameters of the classification method for a one-versus-multiple-class application...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Yunlong Yu, Zhong Ji, Jichang Guo, Yanwei Pang
Zero-shot learning (ZSL) endows a computer vision system with the inferential capability to recognize new categories that have never been seen before. Its two fundamental challenges are visual-semantic embedding and domain adaptation, arising in the cross-modality learning and unseen-class prediction steps, respectively. This paper presents two corresponding methods, Adaptive STructural Embedding (ASTE) and Self-PAced Selective Strategy (SPASS), for these challenges. Specifically, ASTE formulates the visual-semantic interactions in a latent structural support vector machine framework, adaptively adjusting the slack variables to embody the differing reliabilities of training instances...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Sihan Xiong, Yiwei Fu, Asok Ray
This paper proposes a Bayesian nonparametric regression model of panel data for sequential pattern classification. The proposed method provides a flexible and parsimonious model that allows both time-independent spatial variables and time-dependent exogenous variables to serve as predictors. Not only does this method improve the accuracy of parameter estimation for limited data, but it also facilitates model interpretation by identifying statistically significant predictors through hypothesis testing. Moreover, as the data length approaches infinity, posterior consistency of the model is guaranteed for general data-generating processes under regularity conditions...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Gregory Ditzler, Joseph LaBarck, James Ritchie, Gail Rosen, Robi Polikar
Feature subset selection can be used to sieve through large volumes of data and discover the most informative subset of variables for a particular learning problem. Yet, due to memory and other resource constraints (e.g., CPU availability), many state-of-the-art feature subset selection methods cannot be extended to high-dimensional data or to data sets with an extremely large number of instances. In this brief, we extend online feature selection (OFS), a recently introduced approach that uses partial feature information, by developing an ensemble of online linear models to make predictions...
October 11, 2017: IEEE Transactions on Neural Networks and Learning Systems
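A toy sketch of the ingredients named here — online linear learners that each observe only partial feature information, combined by a voting ensemble — might look as follows. The data, the per-round feature subsampling, the perceptron update, and the learning rate are all illustrative assumptions, not the brief's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stream: 500 linearly separable points in 10-D, labels in {-1, +1}.
d, T = 10, 500
w_true = rng.standard_normal(d)
X = rng.standard_normal((T, d))
y = np.sign(X @ w_true)

class PartialPerceptron:
    """Online linear model that observes only k randomly chosen
    features per round (the partial-information aspect of OFS)."""
    def __init__(self, d, k, lr=0.1, seed=0):
        self.w = np.zeros(d)
        self.k, self.lr = k, lr
        self.rng = np.random.default_rng(seed)

    def step(self, x, label):
        idx = self.rng.choice(len(x), self.k, replace=False)
        xs = np.zeros_like(x)
        xs[idx] = x[idx]              # only these entries are visible
        pred = np.sign(self.w @ xs)
        if pred == 0:
            pred = 1.0
        if pred != label:             # perceptron update on mistakes
            self.w += self.lr * label * xs
        return pred

# Ensemble of online learners; the final prediction is a majority vote.
ensemble = [PartialPerceptron(d, k=8, seed=s) for s in range(5)]
mistakes = 0
for t in range(T):
    votes = [m.step(X[t], y[t]) for m in ensemble]
    if np.sign(sum(votes)) != y[t]:
        mistakes += 1
error_rate = mistakes / T
```

Because each learner subsamples features independently, the vote smooths over the information each individual model misses in a given round.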
Jiabei Zeng, Yang Liu, Biao Leng, Zhang Xiong, Yiu-Ming Cheung
Supervised dimensionality reduction (DR) plays an important role in learning systems with high-dimensional data. It projects the data into a low-dimensional subspace while keeping the projected data distinguishable across classes. In addition to preserving discriminant information for binary or multiple classes, some real-world applications also require preserving the preference degrees of assigning the data to multiple aspects, e.g., the different intensities of co-occurring facial expressions or product ratings in different aspects...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
Josey Mathew, Chee Khiang Pang, Ming Luo, Weng Hoe Leong
Historical data sets for fault-stage diagnosis in industrial machines are often imbalanced and consist of multiple categories or classes. Learning discriminative models from such data sets is challenging due to the lack of representative data and the bias of traditional classifiers toward the majority class. Sampling methods such as the synthetic minority oversampling technique (SMOTE) have traditionally been used for such problems to artificially balance the data set before it is passed to a classifier. This paper proposes a weighted kernel-based SMOTE (WK-SMOTE) that overcomes the limitation of SMOTE for nonlinear problems by oversampling in the feature space of a support vector machine (SVM) classifier...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
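For reference, classical SMOTE — the baseline that WK-SMOTE modifies — synthesizes each new minority sample by interpolating between an existing minority sample and one of its nearest minority neighbors. A minimal NumPy sketch (the data set and neighbor count are illustrative; WK-SMOTE instead performs this interpolation in the SVM's kernel-induced feature space):

```python
import numpy as np

rng = np.random.default_rng(5)

def smote(X_min, n_new, k=5):
    """Classical SMOTE in input space: each synthetic point lies on the
    segment between a minority sample and one of its k nearest
    minority-class neighbors."""
    # Pairwise squared distances among minority samples.
    d2 = np.sum((X_min[:, None] - X_min[None]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-neighbors
    neighbors = np.argsort(d2, axis=1)[:, :k]

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))      # pick a minority sample
        j = neighbors[i, rng.integers(k)] # and one of its neighbors
        gap = rng.uniform()               # random point on the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Imbalanced toy set: oversample 10 minority points up to 100.
X_min = rng.standard_normal((10, 2)) + 3.0
X_new = smote(X_min, n_new=90)
```

Because every synthetic point is a convex combination of two minority samples, the oversampled set stays inside the minority region instead of duplicating points exactly.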
Chao Shi, Zongcheng Liu, Xinmin Dong, Yong Chen
A novel tracking-error-compensation-based adaptive neural control scheme is proposed for a class of high-order nonlinear systems with completely unknown nonlinearities and input delay. The tracking errors used in existing work exhibit the following difficulties: first, the output curve always lags behind the desired trajectory; second, some large peak errors decrease tracking precision; and third, a large initial value of the modified tracking error can destabilize the closed-loop system. To tackle these, three corresponding error-compensation terms are constructed: a prediction-and-compensation term, an auxiliary signal produced by a constructed auxiliary system, and a damping term...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
Yuguang Yan, Qingyao Wu, Mingkui Tan, Michael K Ng, Huaqing Min, Ivor W Tsang
In this paper, we study the online heterogeneous transfer (OHT) learning problem, in which the target data of interest arrive in an online manner, while the source data and auxiliary co-occurrence data come from offline sources and can be easily annotated. OHT is very challenging, since the feature spaces of the source and target domains differ. To address this, we propose a novel technique, OHT by hedge ensemble, that exploits both the offline and online knowledge of the different domains. To this end, we build an offline decision function based on a heterogeneous similarity constructed using labeled source data and unlabeled auxiliary co-occurrence data...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems