IEEE Transactions on Neural Networks and Learning Systems

https://www.readbyqxmd.com/read/28436906/nonlinear-decoupling-control-with-anfis-based-unmodeled-dynamics-compensation-for-a-class-of-complex-industrial-processes
#1
Yajun Zhang, Tianyou Chai, Hong Wang, Dianhui Wang, Xinkai Chen
Complex industrial processes are multivariable and generally exhibit strong coupling among their control loops together with strong nonlinearities, which makes it very difficult to obtain an accurate model. As a result, conventional and data-driven control methods are difficult to apply. Using a twin-tank level control system as an example, this paper proposes a novel multivariable decoupling control algorithm with adaptive neural-fuzzy inference system (ANFIS)-based unmodeled dynamics (UD) compensation for a class of complex industrial processes...
April 24, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436904/user-preference-based-dual-memory-neural-model-with-memory-consolidation-approach
#2
Jauwairia Nasir, Yong-Ho Yoo, Deok-Hwa Kim, Jong-Hwan Kim
Memory modeling has been a popular topic of research for improving the performance of autonomous agents in cognition-related problems. Apart from learning distinct experiences correctly, significant or recurring experiences are expected to be learned better and retrieved more easily. To achieve this objective, this paper proposes a user preference-based dual-memory adaptive resonance theory network model, which makes use of a user preference to encode memories with various strengths and to learn and forget at various rates...
April 24, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436896/online-hashing
#3
Long-Kai Huang, Qiang Yang, Wei-Shi Zheng
Although hash function learning algorithms have achieved great success in recent years, most existing hash models are trained off-line and are therefore not suitable for processing sequential or online data. To address this problem, this paper proposes an online hash model that accommodates data arriving as a stream for online learning. Specifically, a new loss function is proposed to measure the similarity loss between a pair of data samples in Hamming space. Then, a structured hash model is derived and optimized in a passive-aggressive way...
April 24, 2017: IEEE Transactions on Neural Networks and Learning Systems
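As a rough illustration of the general idea behind pairwise Hamming-space losses for a linear hash model (this is a toy sketch, not the specific loss or passive-aggressive update proposed in the paper; the margin and code length here are illustrative):

```python
import numpy as np

def hash_codes(W, x):
    """Binary codes from a linear hash model: b = sign(W x)."""
    return np.sign(W @ x)

def hamming_distance(b1, b2):
    """Number of differing bits between two +/-1 code vectors."""
    return int(np.sum(b1 != b2))

def pairwise_similarity_loss(W, x1, x2, similar, margin=2):
    """Hinge-style loss on the Hamming distance of a sample pair:
    similar pairs are pushed below the margin, dissimilar pairs above it."""
    d = hamming_distance(hash_codes(W, x1), hash_codes(W, x2))
    r = W.shape[0]  # code length
    if similar:
        return max(0, d - margin)        # similar pair: distance should be small
    return max(0, (r - margin) - d)      # dissimilar pair: distance should be large
```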
https://www.readbyqxmd.com/read/28436894/online-learning-algorithms-can-converge-comparably-fast-as-batch-learning
#4
Junhong Lin, Ding-Xuan Zhou
Online learning algorithms in a reproducing kernel Hilbert space associated with convex loss functions are studied. We show that, in terms of the expected excess generalization error, they can converge comparably fast to the corresponding kernel-based batch learning algorithms. Under mild conditions on loss functions and approximation errors, fast learning rates and finite-sample upper bounds are established using polynomially decreasing step-size sequences. For some commonly used loss functions for classification, such as the logistic and the p-norm hinge loss functions with p ∈ [1,2], the learning rates are the same as those for Tikhonov regularization and can be of order O(T^(-1/2) log T), which is nearly optimal up to a logarithmic factor...
April 24, 2017: IEEE Transactions on Neural Networks and Learning Systems
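A minimal sketch of online gradient descent in an RKHS with a polynomially decreasing step size, for the square loss only (the paper analyzes general convex losses and step-size schedules; the Gaussian kernel and parameter values below are assumptions for illustration):

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

class OnlineKernelLeastSquares:
    """Online gradient descent in an RKHS for the square loss,
    with a polynomially decreasing step size eta_t = eta0 / t**theta."""
    def __init__(self, eta0=0.5, theta=0.5, sigma=1.0):
        self.eta0, self.theta, self.sigma = eta0, theta, sigma
        self.points, self.coefs = [], []   # kernel expansion of f_t
        self.t = 0

    def predict(self, x):
        return sum(c * gaussian_kernel(xi, x, self.sigma)
                   for c, xi in zip(self.coefs, self.points))

    def update(self, x, y):
        self.t += 1
        eta_t = self.eta0 / self.t ** self.theta
        residual = self.predict(x) - y
        # Gradient step: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)
        self.points.append(x)
        self.coefs.append(-eta_t * residual)
```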
https://www.readbyqxmd.com/read/28436893/a-parallel-multiclassification-algorithm-for-big-data-using-an-extreme-learning-machine
#5
Mingxing Duan, Kenli Li, Xiangke Liao, Keqin Li
As data sets become larger and more complicated, an extreme learning machine (ELM) that runs in a traditional serial environment cannot realize its potential to be fast and effective. Although a parallel ELM (PELM) based on MapReduce achieves faster learning on large-scale data than the same ELM algorithms run serially, some operations, such as storing intermediate results on disk and making multiple copies of data for each task, are unavoidable; these operations create a large amount of extra overhead and degrade the learning speed and efficiency of PELMs...
April 24, 2017: IEEE Transactions on Neural Networks and Learning Systems
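For reference, a basic serial ELM, the building block that parallel variants distribute, looks roughly like this (a sketch of the standard algorithm: random hidden layer followed by a least-squares solve; the sigmoid activation and hidden size are assumptions, and the paper's contribution is the parallelization, not this serial form):

```python
import numpy as np

def train_elm(X, Y, n_hidden=100, seed=0):
    """Basic (serial) extreme learning machine:
    random hidden layer, output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ Y                  # output weights via pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```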
https://www.readbyqxmd.com/read/28436892/godec-fast-and-robust-low-rank-matrix-decomposition-based-on-maximum-correntropy
#6
Kailing Guo, Liu Liu, Xiangmin Xu, Dong Xu, Dacheng Tao
GoDec is an efficient low-rank matrix decomposition algorithm. However, it achieves optimal performance only when the corruptions consist of sparse errors and Gaussian noise. This paper addresses the problem of a matrix composed of a low-rank component and unknown corruptions. We introduce a robust local similarity measure called correntropy to describe the corruptions and, in doing so, obtain a more robust and faster low-rank decomposition algorithm: GoDec+. Based on half-quadratic optimization and the greedy bilateral paradigm, we deliver a solution to the maximum correntropy criterion (MCC)-based low-rank decomposition problem...
April 24, 2017: IEEE Transactions on Neural Networks and Learning Systems
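The correntropy measure itself is simple to compute; a sketch of the standard Gaussian-kernel definition is below (the kernel bandwidth is an assumption, and GoDec+'s half-quadratic optimization of the MCC objective is not shown):

```python
import numpy as np

def correntropy(a, b, sigma=1.0):
    """Empirical correntropy between two arrays under a Gaussian kernel:
    V(a, b) = mean( exp(-(a_i - b_i)^2 / (2 sigma^2)) ).
    Large outliers contribute almost nothing to the average, which is
    what makes correntropy-based criteria robust to non-Gaussian corruptions."""
    diff = np.asarray(a) - np.asarray(b)
    return np.mean(np.exp(-diff ** 2 / (2 * sigma ** 2)))
```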
https://www.readbyqxmd.com/read/28436895/online-learning-algorithm-based-on-adaptive-control-theory
#7
Jian-Wei Liu, Jia-Jia Zhou, Mohamed S Kamel, Xiong-Lin Luo
This paper proposes a new online learning algorithm based on adaptive control (AC) theory, which we therefore call the AC algorithm. In comparison with the gradient descent (GD) and exponential gradient (EG) algorithms, which have been applied to online prediction problems, we develop a new form of AC theory for online prediction and investigate two key questions: how to obtain a new update law with a tighter upper bound on the error than the square loss, and how to compare the upper bounds on the accumulated losses of the three algorithms. We obtain a new update law that fully utilizes model reference AC theory...
April 18, 2017: IEEE Transactions on Neural Networks and Learning Systems
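The two baselines named in the abstract have well-known online updates; a sketch of both for the square loss is given below (the step size is an assumption, the EG version keeps the weights on the probability simplex, and the paper's AC-based update is different and not shown):

```python
import numpy as np

def gd_update(w, x, y, eta=0.1):
    """Online gradient descent step for the square loss (w.x - y)^2."""
    grad = 2.0 * (w @ x - y) * x
    return w - eta * grad

def eg_update(w, x, y, eta=0.1):
    """Exponentiated (exponential) gradient step: multiplicative update
    followed by renormalization so the weights stay on the simplex."""
    grad = 2.0 * (w @ x - y) * x
    w_new = w * np.exp(-eta * grad)
    return w_new / np.sum(w_new)
```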
https://www.readbyqxmd.com/read/28436905/robust-structured-nonnegative-matrix-factorization-for-image-representation
#8
Zechao Li, Jinhui Tang, Xiaofei He
Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied in various applications. In contrast to previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the ℓ2,p-norm (especially when 0 < p ≤ 1) loss function...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
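To make the role of the ℓ2,p-norm concrete, a sketch of the row-wise ℓ2,p loss on a residual matrix is shown below (a generic illustration under the assumption that the norm is taken over rows; the paper's full NMF objective and optimization are not reproduced):

```python
import numpy as np

def l2p_loss(E, p=0.5):
    """Row-wise l_{2,p} loss of a residual matrix E:
    sum_i ||E_i||_2^p with 0 < p <= 1.
    Taking p < 1 downweights rows with large residuals, giving more
    robustness to outliers than the squared Frobenius norm."""
    row_norms = np.linalg.norm(E, axis=1)
    return np.sum(row_norms ** p)
```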
https://www.readbyqxmd.com/read/28436903/robust-multiview-data-analysis-through-collective-low-rank-subspace
#9
Zhengming Ding, Yun Fu
Multiview data are abundant in real-world applications, since data are often captured from various viewpoints and by multiple sensors to represent them better. Conventional multiview learning methods aim to learn multiple view-specific transformations while assuming that the view information of both training and test data is available in advance. However, they fail when no prior knowledge of the probe data's view is available, since the correct view-specific projections cannot then be used to extract effective feature representations...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436902/learning-to-predict-consequences-as-a-method-of-knowledge-transfer-in-reinforcement-learning
#10
Eric Chalmers, Edgar Bermudez Contreras, Brandon Robertson, Artur Luczak, Aaron Gruber
The reinforcement learning (RL) paradigm allows agents to solve tasks through trial-and-error learning. To be capable of efficient, long-term learning, RL agents should be able to apply knowledge gained in the past to new tasks they may encounter in the future. The ability to predict actions' consequences may facilitate such knowledge transfer. We consider here domains where an RL agent has access to two kinds of information: agent-centric information with constant semantics across tasks, and environment-centric information, which is necessary to solve the task, but with semantics that differ between tasks...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436901/on-better-exploring-and-exploiting-task-relationships-in-multitask-learning-joint-model-and-feature-learning
#11
Ya Li, Xinmei Tian, Tongliang Liu, Dacheng Tao
Multitask learning (MTL) aims to learn multiple tasks simultaneously through the interdependence between different tasks. How to measure the relatedness between tasks remains a popular research issue. There are two main ways to measure relatedness between tasks: sharing common parameters and sharing common features across different tasks. However, these two types of relatedness are mainly learned independently, leading to a loss of information. In this paper, we propose a new strategy to measure the relatedness that jointly learns shared parameters and shared feature representations...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436900/a-discrete-time-recurrent-neural-network-for-solving-rank-deficient-matrix-equations-with-an-application-to-output-regulation-of-linear-systems
#12
Tao Liu, Jie Huang
This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
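A generic discrete-time recursion of this flavor, sketched below, runs a gradient-style iteration while the available matrix estimate improves over time (this is an illustrative stand-in, not the specific recurrent network or convergence conditions in the paper; the step size and the list of matrix estimates are assumptions):

```python
import numpy as np

def recurrent_solve(A_seq, b, x0=None, eta=0.1):
    """Discrete-time gradient-style recursion for A x = b when only a
    sequence of matrices A_k converging to the true A is available:
        x_{k+1} = x_k - eta * A_k^T (A_k x_k - b).
    For a consistent (possibly rank-deficient) system and a small enough
    step size, the iterate approaches a least-squares solution."""
    x = np.zeros(A_seq[0].shape[1]) if x0 is None else x0
    for A_k in A_seq:
        x = x - eta * A_k.T @ (A_k @ x - b)
    return x
```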
https://www.readbyqxmd.com/read/28436899/adaptive-backstepping-based-neural-tracking-control-for-mimo-nonlinear-switched-systems-subject-to-input-delays
#13
Ben Niu, Lu Li
This brief proposes a new neural-network (NN)-based adaptive output tracking control scheme for a class of disturbed multiple-input multiple-output uncertain nonlinear switched systems with input delays. By combining the universal approximation ability of radial basis function NNs and adaptive backstepping recursive design with an improved multiple Lyapunov function (MLF) scheme, a novel adaptive neural output tracking controller design method is presented for the switched system. The feature of the developed design is that different coordinate transformations are adopted to overcome the conservativeness caused by adopting a common coordinate transformation for all subsystems...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436898/extended-polynomial-growth-transforms-for-design-and-training-of-generalized-support-vector-machines
#14
Ahana Gangopadhyay, Oindrila Chatterjee, Shantanu Chakrabartty
Growth transformations constitute a class of fixed-point multiplicative update algorithms that were originally proposed for optimizing polynomial and rational functions over a domain of probability measures. In this paper, we extend this framework to the domain of bounded real variables which can be applied towards optimizing the dual cost function of a generic support vector machine (SVM). The approach can, therefore, not only be used to train traditional soft-margin binary SVMs, one-class SVMs, and probabilistic SVMs but can also be used to design novel variants of SVMs with different types of convex and quasi-convex loss functions...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436897/tensor-factorized-neural-networks
#15
Jen-Tzung Chien, Yi-Ting Bao
The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NNs) crucial topics. Conventionally, the NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize to learning representations from multiway observations. The classification performance of a vectorized NN is constrained because the temporal or spatial information across neighboring ways is disregarded. More parameters are required to learn the complicated data structure...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28436891/off-policy-reinforcement-learning-for-synchronization-in-multiagent-graphical-games
#16
Jinna Li, Hamidreza Modares, Tianyou Chai, Frank L Lewis, Lihua Xie
This paper develops an off-policy reinforcement learning (RL) algorithm to solve optimal synchronization of multiagent systems. This is accomplished by using the framework of graphical games. In contrast to traditional control protocols, which require complete knowledge of agent dynamics, the proposed off-policy RL algorithm is a model-free approach, in that it solves the optimal synchronization problem without requiring any knowledge of the agent dynamics. A prescribed control policy, called the behavior policy, is applied to each agent to generate and collect data for learning...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28422671/structured-learning-of-tree-potentials-in-crf-for-image-segmentation
#17
Fayao Liu, Guosheng Lin, Ruizhi Qiao, Chunhua Shen
We propose a new approach to image segmentation, which exploits the advantages of both conditional random fields (CRFs) and decision trees. In the literature, the potential functions of CRFs are mostly defined as a linear combination of some predefined parametric models, and then, methods, such as structured support vector machines, are applied to learn those linear coefficients. We instead formulate the unary and pairwise potentials as nonparametric forests, i.e., ensembles of decision trees, and learn the ensemble parameters and the trees in a unified optimization problem within the large-margin framework...
April 13, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28422670/observer-based-robust-coordinated-control-of-multiagent-systems-with-input-saturation
#18
Xiaoling Wang, Housheng Su, Michael Z Q Chen, Xiaofan Wang
This paper addresses the robust semiglobal coordinated control of multiple-input multiple-output multiagent systems with input saturation together with dead zone and input additive disturbance. An observer-based coordinated control protocol is constructed by combining the parameterized low-and-high-gain feedback technique with the high-gain observer design approach. It is shown that, under some mild assumptions on the agents' intrinsic dynamics, robust semiglobal consensus or robust semiglobal swarm behavior can be achieved for undirected connected multiagent systems...
April 13, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28422698/dissipativity-based-resilient-filtering-of-periodic-markovian-jump-neural-networks-with-quantized-measurements
#19
Renquan Lu, Jie Tao, Peng Shi, Hongye Su, Zheng-Guang Wu, Yong Xu
The problem of dissipativity-based resilient filtering for discrete-time periodic Markov jump neural networks in the presence of quantized measurements is investigated in this paper. Due to the limited capacity of the network medium, a logarithmic quantizer is applied to the underlying systems. Considering the fact that the filter is realized through a network, randomly occurring parameter uncertainties of the filter are modeled by two mode-dependent Bernoulli processes. By establishing the mode-dependent periodic Lyapunov function, sufficient conditions are given to ensure the stability and dissipativity of the filtering error system...
April 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
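A logarithmic quantizer of the kind commonly used to model quantized measurements in networked filtering can be sketched as follows (a generic textbook form with levels u_i = rho^i * u0 and sector bound delta = (1 - rho)/(1 + rho); the parameter values are assumptions and this is not necessarily the paper's exact setup):

```python
import numpy as np

def log_quantizer(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer: q(v) = sign(v) * u_i whenever
    u_i / (1 + delta) < |v| <= u_i / (1 - delta), with u_i = rho**i * u0
    and delta = (1 - rho) / (1 + rho); q(0) = 0."""
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    a = abs(v)
    # pick the unique integer level index whose sector contains |v|
    i = np.floor((np.log(a / u0) + np.log(1.0 - delta)) / np.log(rho))
    return np.sign(v) * u0 * rho ** i
```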
https://www.readbyqxmd.com/read/28422697/manifold-regularized-correlation-object-tracking
#20
Hongwei Hu, Bo Ma, Jianbing Shen, Ling Shao
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions...
April 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
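The circulant-structure trick that correlation trackers build on has a compact closed form; a sketch of the standard single-sample, single-channel case is shown below (this is the basic ridge-regression correlation filter solved in the Fourier domain, not the paper's manifold-regularized, multi-base-sample model; the regularization weight is an assumption):

```python
import numpy as np

def train_correlation_filter(x, y, lam=1e-3):
    """Ridge regression over all cyclic shifts of a base sample x with
    target response y, solved in closed form in the Fourier domain by
    exploiting the circulant structure of the shifted-sample matrix."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    W = np.conj(X) * Y / (np.conj(X) * X + lam)
    return W

def detect(W, z):
    """Correlation response of the learned filter over all shifts of z."""
    return np.real(np.fft.ifft(W * np.fft.fft(z)))
```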