Read by QxMD

IEEE Transactions on Neural Networks and Learning Systems

Min Han, Meiling Xu
The echo state network is a novel kind of recurrent neural network, with a trainable linear readout layer and a large, fixed, recurrently connected hidden layer, which can be used to map the rich dynamics of complex real-world data sets. It has been studied extensively for time series prediction. However, an ill-posed problem may arise when the number of real-world training samples is smaller than the size of the hidden layer. In this brief, a Laplacian echo state network (LAESN) is proposed to overcome the ill-posed problem and obtain low-dimensional output weights...
January 2018: IEEE Transactions on Neural Networks and Learning Systems
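For context on the base model, a minimal echo state network with a ridge-regression readout can be sketched as follows. This is a generic ESN, not the proposed LAESN; the reservoir size, scaling, and toy task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen for illustration only.
n_reservoir, n_in, ridge = 50, 1, 1e-6

# Fixed random input and recurrent weights: never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence; collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(300) * 0.1
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Ridge-regression readout: the only trained part of the network.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)  # should be small on this easy task
```

Note the ill-posed regime the abstract mentions: with fewer than 50 training samples here, `X.T @ X` would be rank deficient, and the ridge term is what keeps the solve well defined.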
Zhengcai Cao, Qing Xiao, Ran Huang, Mengchu Zhou
In this paper, the problem of path following for underactuated snake robots is investigated using approximate dynamic programming and neural networks (NNs). The lateral undulatory gait of a snake robot is stabilized on a virtual holonomic constraint manifold through a partial feedback linearizing control law. Based on a dynamic compensator and a line-of-sight guidance law, the path-following problem is transformed into a regulation problem for a nonlinear system with uncertainties. Subsequently, it is solved by an infinite-horizon optimal control scheme using a single critic NN...
January 2018: IEEE Transactions on Neural Networks and Learning Systems
Yu-Jun Zheng, Wei-Guo Sheng, Xing-Ming Sun, Sheng-Yong Chen
Passenger profiling plays a vital part in commercial aviation security, but classical methods become very inefficient in handling the rapidly increasing amounts of electronic records. This paper proposes a deep learning approach to passenger profiling. At the center of our approach is a Pythagorean fuzzy deep Boltzmann machine (PFDBM), whose parameters are expressed by Pythagorean fuzzy numbers such that each neuron can learn how a feature affects the production of the correct output from both the positive and negative sides...
December 2017: IEEE Transactions on Neural Networks and Learning Systems
Ali Heydari
Adaptive optimal control using value iteration initiated from a stabilizing control policy is theoretically analyzed. The analysis is in terms of stability of the system during the learning stage and includes the system controlled by any fixed control policy and also by an evolving policy. A feature of the presented results is finding subsets of the region of attraction. This is done so that if the initial condition belongs to this region, the entire state trajectory remains within the training region. Therefore, the function approximation results remain reliable, as no extrapolation will be conducted...
October 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
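The value-iteration loop at the heart of such schemes reduces, in the simplest case, to the tabular sweep below. The paper's setting is continuous-state with function approximation and a stabilizing initial policy; this tiny MDP is invented purely to illustrate the basic iteration.

```python
# Tabular value iteration on a tiny deterministic MDP. States 0..3; moving
# right from state 2 into the absorbing goal state 3 pays reward 1.
GAMMA = 0.9
N = 4

def step(s, a):            # a is -1 (left) or +1 (right)
    if s == 3:
        return 3, 0.0      # absorbing goal, no further reward
    s2 = min(max(s + a, 0), 3)
    return s2, (1.0 if s2 == 3 else 0.0)

V = [0.0] * N
for _ in range(50):        # synchronous value-iteration sweeps
    V = [max(r + GAMMA * V[s2] for s2, r in (step(s, -1), step(s, 1)))
         for s in range(N)]

print([round(v, 2) for v in V])
```

In the paper's continuous setting, `V` is held by a function approximator instead of a table, which is exactly why trajectories must stay inside the training region for the iterates to remain reliable.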
Mojtaba Nayyeri, Hadi Sadoghi Yazdi, Alaleh Maskooki, Modjtaba Rouhani
Several objective functions have been proposed in the literature to adjust the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of the network based on the existing objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one in a compact input sample space...
October 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
Johan Bjurgert, Patricio E Valenzuela, Cristian R Rojas
In the field of machine learning, the Adaptive Boosting algorithm has been successfully applied to a wide range of regression and classification problems. However, to the best of the authors' knowledge, its use for estimating dynamical systems has not been explored. In this brief, we explore the connection between Adaptive Boosting and system identification, and give examples of an identification method that makes use of this connection. We prove that, under reasonable assumptions, the resulting estimate converges to the true underlying system for an output-error model structure in the large-sample limit, and we derive a bound on the model mismatch for the noise-free case...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
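A bare-bones Adaptive Boosting loop with decision stumps shows the reweighting mechanism the brief builds on. This is generic AdaBoost on an invented toy classification set, not the system-identification variant the paper develops.

```python
import math

# Toy 1-D binary classification: label = sign(x - 0.45).
X = [i / 10 for i in range(10)]
y = [1 if x > 0.45 else -1 for x in X]

w = [1 / len(X)] * len(X)  # sample weights, initially uniform
ensemble = []              # list of (alpha, threshold, polarity)

for _ in range(5):
    # Pick the decision stump h(x) = polarity * sign(x - thr) with the
    # least weighted error under the current sample weights.
    best = None
    for thr in X:
        for pol in (1, -1):
            preds = [pol if x > thr else -pol for x in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thr, pol, preds)
    err, thr, pol, preds = best
    alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
    ensemble.append((alpha, thr, pol))
    # Reweight: misclassified samples gain weight, correct ones lose it.
    w = [wi * math.exp(-alpha * yi * p) for wi, p, yi in zip(w, preds, y)]
    z = sum(w)
    w = [wi / z for wi in w]

def predict(x):
    s = sum(a * (p if x > t else -p) for a, t, p in ensemble)
    return 1 if s > 0 else -1

print([predict(x) for x in X] == y)  # True on this separable toy set
```

The system-identification connection replaces the stump with a candidate dynamical model and the 0/1 error with a prediction-error criterion; the weighting and aggregation logic stays the same.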
Arash Gharehbaghi, Maria Linden
This paper presents a novel method for learning the cyclic contents of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods at different levels of learning for enhanced performance. It is employed in a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic contents of the time series are preserved in an efficient manner. This paper suggests a systematic procedure for finding the design parameters of the classification method for a one-versus-multiple class application...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Yunlong Yu, Zhong Ji, Jichang Guo, Yanwei Pang
Zero-shot learning (ZSL) endows a computer vision system with the inferential capability to recognize new categories that have never been seen before. Its two fundamental challenges are visual-semantic embedding and domain adaptation, arising in the cross-modality learning and unseen-class prediction steps, respectively. This paper presents two corresponding methods, Adaptive STructural Embedding (ASTE) and Self-PAced Selective Strategy (SPASS), for these challenges. Specifically, ASTE formulates the visual-semantic interactions in a latent structural support vector machine framework, adaptively adjusting the slack variables to reflect the different degrees of reliability among training instances...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Sihan Xiong, Yiwei Fu, Asok Ray
This paper proposes a Bayesian nonparametric regression model of panel data for sequential pattern classification. The proposed method provides a flexible and parsimonious model that allows both time-independent spatial variables and time-dependent exogenous variables to serve as predictors. Not only does this method improve the accuracy of parameter estimation for limited data, but it also facilitates model interpretation by identifying statistically significant predictors with hypothesis testing. Moreover, as the data length approaches infinity, posterior consistency of the model is guaranteed for general data-generating processes under regularity conditions...
October 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
Gregory Ditzler, Joseph LaBarck, James Ritchie, Gail Rosen, Robi Polikar
Feature subset selection can be used to sieve through large volumes of data and discover the most informative subset of variables for a particular learning problem. Yet, due to memory and other resource constraints (e.g., CPU availability), many of the state-of-the-art feature subset selection methods cannot be extended to high dimensional data, or data sets with an extremely large volume of instances. In this brief, we extend online feature selection (OFS), a recently introduced approach that uses partial feature information, by developing an ensemble of online linear models to make predictions...
October 11, 2017: IEEE Transactions on Neural Networks and Learning Systems
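The partial-information flavor of online feature selection can be sketched as online learning with a per-step truncation to a feature budget. This is a generic sketch: the budget `K`, dimension, and synthetic stream are assumptions, and the brief's ensemble extension is not shown.

```python
import random

random.seed(0)

K, D, ETA = 2, 5, 0.2   # feature budget, dimension, learning rate (assumed)

# Synthetic stream: only features 0 and 1 determine the label.
def sample():
    x = [random.uniform(-1, 1) for _ in range(D)]
    return x, (1 if x[0] + x[1] > 0 else -1)

w = [0.0] * D
for _ in range(2000):
    x, y = sample()
    if y * sum(wi * xi for wi, xi in zip(w, x)) < 1:
        # Hinge-loss subgradient step on the full weight vector.
        w = [wi + ETA * y * xi for wi, xi in zip(w, x)]
    # Truncation: keep only the K largest-magnitude weights.
    kept = set(sorted(range(D), key=lambda i: abs(w[i]), reverse=True)[:K])
    w = [wi if i in kept else 0.0 for i, wi in enumerate(w)]

print(sorted(i for i, wi in enumerate(w) if wi != 0.0))
```

Because irrelevant weights are zeroed every round, they never accumulate, while the informative ones drift upward; the model never holds more than `K` active features at once, which is the memory constraint OFS targets.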
Jiabei Zeng, Yang Liu, Biao Leng, Zhang Xiong, Yiu-Ming Cheung
Supervised dimensionality reduction (DR) plays an important role in learning systems with high-dimensional data. It projects the data into a low-dimensional subspace and keeps the projected data distinguishable in different classes. In addition to preserving the discriminant information for binary or multiple classes, some real-world applications also require keeping the preference degrees of assigning the data to multiple aspects, e.g., to keep the different intensities for co-occurring facial expressions or the product ratings in different aspects...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
Josey Mathew, Chee Khiang Pang, Ming Luo, Weng Hoe Leong
Historical data sets for fault stage diagnosis in industrial machines are often imbalanced and consist of multiple categories or classes. Learning discriminative models from such data sets is challenging due to the lack of representative data and the bias of traditional classifiers toward the majority class. Sampling methods such as the synthetic minority oversampling technique (SMOTE) have traditionally been used for such problems to artificially balance the data set before a classifier is trained. This paper proposes a weighted kernel-based SMOTE (WK-SMOTE) that overcomes the limitation of SMOTE for nonlinear problems by oversampling in the feature space of a support vector machine (SVM) classifier...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
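The classic input-space SMOTE that WK-SMOTE builds on fits in a few lines: each synthetic point is placed at a random position on the segment between a minority sample and one of its neighbors. The 2-D minority samples below are made up for illustration.

```python
import random

random.seed(1)

# Hypothetical minority-class samples in 2-D.
minority = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (1.1, 1.3)]

def nearest_neighbor(p, points):
    """Closest other minority point to p (squared Euclidean distance)."""
    others = [q for q in points if q != p]
    return min(others, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def smote(points, n_new):
    """Classic SMOTE: interpolate between a sample and a neighbor."""
    synthetic = []
    for _ in range(n_new):
        p = random.choice(points)
        q = nearest_neighbor(p, points)
        gap = random.random()  # random position along the segment
        synthetic.append(tuple(pi + gap * (qi - pi) for pi, qi in zip(p, q)))
    return synthetic

new = smote(minority, 5)
print(len(new))  # 5
```

WK-SMOTE instead performs this interpolation in the SVM's kernel-induced feature space and weights the synthetic points; the sketch above shows only the input-space baseline it generalizes.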
Chao Shi, Zongcheng Liu, Xinmin Dong, Yong Chen
A novel tracking error-compensation-based adaptive neural control scheme is proposed for a class of high-order nonlinear systems with completely unknown nonlinearities and input delay. The tracking errors used in existing papers suffer from the following difficulties: first, the output curve always lags behind the desired trajectory; second, large peak errors reduce tracking precision; and third, a large initial value of the modified tracking error can destabilize the closed-loop system. To tackle these, three corresponding error-compensation terms are constructed: a prediction and compensation term, an auxiliary signal produced by a constructed auxiliary system, and a damping term...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
Yuguang Yan, Qingyao Wu, Mingkui Tan, Michael K Ng, Huaqing Min, Ivor W Tsang
In this paper, we study the online heterogeneous transfer (OHT) learning problem, where the target data of interest arrive in an online manner, while the source data and auxiliary co-occurrence data are from offline sources and can be easily annotated. OHT is very challenging, since the feature spaces of the source and target domains are different. To address this, we propose a novel technique called OHT by hedge ensemble by exploiting both offline knowledge and online knowledge of different domains. To this end, we build an offline decision function based on a heterogeneous similarity that is constructed using labeled source data and unlabeled auxiliary co-occurrence data...
October 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
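The "hedge ensemble" in the name refers to a Hedge-style multiplicative weight update for combining decision functions, which can be illustrated with two experts (say, the offline and online functions). The 0/1 losses below are hypothetical, and the paper's OHT algorithm is more involved than this generic sketch.

```python
# Generic Hedge update: each expert's weight decays exponentially in its loss.
BETA = 0.9                     # decay rate per unit loss (assumed)
weights = [1.0, 1.0]           # [offline expert, online expert]
rounds = [(0, 1), (0, 1), (1, 0)]  # per-round 0/1 losses of each expert

for losses in rounds:
    weights = [w * BETA ** l for w, l in zip(weights, losses)]
    z = sum(weights)
    weights = [w / z for w in weights]  # renormalize to a distribution

print([round(w, 3) for w in weights])
```

After these rounds the offline expert, having incurred less loss, carries the larger share of the combined decision.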
Yong Luo, Yonggang Wen, Dacheng Tao
Distance metric learning plays a crucial role in diverse machine learning algorithms and applications. When the labeled information in a target domain is limited, transfer metric learning (TML) helps to learn the metric by leveraging the sufficient information from other related domains. Multitask metric learning (MTML), which can be regarded as a special case of TML, performs transfer across all related domains. Current TML tools usually assume that the same feature representation is exploited for different domains...
October 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
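The object being transferred in metric learning is typically a Mahalanobis-type distance, which is easy to illustrate directly. The matrix `L` below is a made-up stand-in; TML would estimate it from the source and target domains.

```python
import math

# Learned Mahalanobis-style metric: d_M(x, y)^2 = (x - y)^T L^T L (x - y),
# i.e. Euclidean distance after the linear map x -> L x.
L = [[2.0, 0.0],
     [0.0, 0.5]]  # stretch dimension 0, shrink dimension 1

def transform(x):
    return [sum(L[i][j] * x[j] for j in range(2)) for i in range(2)]

def d_M(x, y):
    u, v = transform(x), transform(y)
    return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))

# A unit step along dimension 0 now counts four times a step along dimension 1.
print(d_M([0, 0], [1, 0]), d_M([0, 0], [0, 1]))  # 2.0 0.5
```

The assumption the abstract questions is visible here: transferring `L` between domains only makes sense if both domains share the same two-dimensional feature representation.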
Biao Luo, Derong Liu, Huai-Ning Wu
Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require a specific performance index and suit only linear or affine nonlinear systems, which is restrictive in practice...
October 3, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xian-Ming Zhang, Wen-Juan Lin, Qing-Long Han, Yong He, Min Wu
This brief is concerned with the global asymptotic stability of a neural network with a time-varying delay. First, by introducing an auxiliary vector with some nonorthogonal polynomials, a slack-matrix-based integral inequality is established, which includes some existing ones as special cases. Second, a novel Lyapunov-Krasovskii functional is constructed to suit the use of the obtained integral inequality. As a result, a less conservative stability criterion is derived, whose effectiveness is finally demonstrated through two widely used numerical examples...
October 3, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xu Shen, Xinmei Tian, Tongliang Liu, Fang Xu, Dacheng Tao
Dropout has been proven to be an effective algorithm for training robust deep networks because of its ability to prevent overfitting by avoiding the co-adaptation of feature detectors. Current explanations of dropout include bagging, naive Bayes, regularization, and sex in evolution. According to the activation patterns of neurons in the human brain, the firing rates of neurons faced with different situations are random and continuous, not binary as in current dropout. Inspired by this phenomenon, we extend traditional binary dropout to continuous dropout...
October 3, 2017: IEEE Transactions on Neural Networks and Learning Systems
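The binary-versus-continuous contrast can be seen in a few lines. The uniform mask below is an assumption for illustration; the paper derives its own continuous dropout distributions.

```python
import random

random.seed(0)

def binary_dropout(acts, p=0.5):
    """Standard dropout: keep each unit with prob 1 - p, inverted scaling."""
    return [a * (0 if random.random() < p else 1) / (1 - p) for a in acts]

def continuous_dropout(acts):
    """Continuous mask: scale each unit by a U(0, 1) draw (mean 0.5)."""
    return [a * random.uniform(0, 1) / 0.5 for a in acts]

acts = [1.0] * 10000
b, c = binary_dropout(acts), continuous_dropout(acts)
# Both masks preserve the expected activation (~1.0 here), but the continuous
# one perturbs every unit by a graded amount instead of zeroing half of them.
print(round(sum(b) / len(b), 2), round(sum(c) / len(c), 2))
```

The graded mask mirrors the continuous firing rates described in the abstract, while keeping the scale-preserving property of inverted dropout.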
Qiaolin Ye, Henghao Zhao, Zechao Li, Xubing Yang, Shangbing Gao, Tongming Yin, Ning Ye
Twin support vector clustering (TWSVC) is a recently proposed, powerful k-plane clustering method. However, it is sensitive to outliers due to its use of the squared L2-norm distance. Besides, TWSVC is computationally expensive, owing to the need to solve a series of constrained quadratic programming problems (CQPPs) when learning each clustering plane. To address these problems, this brief first develops a new k-plane clustering method called L1-norm distance minimization-based robust TWSVC, which uses the robust L1-norm distance...
October 3, 2017: IEEE Transactions on Neural Networks and Learning Systems
Chia-Hsiang Lin, Chong-Yung Chi, Lulu Chen, David J Miller, Yue Wang
While non-negative blind source separation (nBSS) has found many successful applications in science and engineering, model order selection, determining the number of sources, remains a critical yet unresolved problem. Various model order selection methods have been proposed and applied to real-world data sets but with limited success, with both order over- and under-estimation reported. By studying existing schemes, we have found that the unsatisfactory results are mainly due to invalid assumptions, model oversimplification, subjective thresholding, and/or to assumptions made solely for mathematical convenience...
October 3, 2017: IEEE Transactions on Neural Networks and Learning Systems
