IEEE Transactions on Neural Networks and Learning Systems

https://www.readbyqxmd.com/read/30222587/dualityfree-methods-for-stochastic-composition-optimization
#1
Liu Liu, Ji Liu, Dacheng Tao
In this paper, we consider composition optimization with two expected-value functions in the form (1/n) Σ_{i=1}^{n} F_i( (1/m) Σ_{j=1}^{m} G_j(x) ) + R(x), which formulates many important problems in statistical learning and machine learning, such as solving Bellman equations in reinforcement learning and nonlinear embedding. Full-gradient- and classical stochastic-gradient-descent-based optimization algorithms are unsuitable or computationally expensive for solving this problem because of the inner expectation (1/m) Σ_{j=1}^{m} G_j(x)...
September 12, 2018: IEEE Transactions on Neural Networks and Learning Systems
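The compositional structure described in the abstract can be made concrete with a small sketch. The function names and the toy instance below are illustrative, not the authors' method; the point is that the inner average over the G_j must be formed before any F_i is applied, which is why a naive single-sample stochastic gradient of the outer sum is biased.

```python
import numpy as np

def composition_objective(x, F_list, G_list, R):
    """Evaluate (1/n) sum_i F_i( (1/m) sum_j G_j(x) ) + R(x).

    The inner average over the G_j must be computed before any F_i is
    applied; sampling a single (i, j) pair therefore gives a biased
    gradient estimate, which is the difficulty the abstract refers to.
    """
    inner = sum(G(x) for G in G_list) / len(G_list)   # (1/m) sum_j G_j(x)
    outer = sum(F(inner) for F in F_list) / len(F_list)
    return outer + R(x)

# Toy instance: affine G_j, quadratic F_i, l2 regularizer R.
rng = np.random.default_rng(0)
G_list = [(lambda A: (lambda x: A @ x))(rng.standard_normal((3, 3)))
          for _ in range(5)]
F_list = [lambda y: 0.5 * float(y @ y) for _ in range(4)]
R = lambda x: 0.1 * float(x @ x)
val = composition_objective(rng.standard_normal(3), F_list, G_list, R)
```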
https://www.readbyqxmd.com/read/30222586/neural-learning-control-of-strict-feedback-systems-using-disturbance-observer
#2
Bin Xu, Yingxin Shou, Jun Luo, Huayan Pu, Zhongke Shi
This paper studies the compound learning control of disturbed uncertain strict-feedback systems. The design uses dynamic surface control equipped with a novel learning scheme. This paper integrates the recently developed online recorded-data-based neural learning with a nonlinear disturbance observer (DOB) to achieve a good "understanding" of the system uncertainty, including unknown dynamics and time-varying disturbance. To show how the neural networks and the DOB cooperate with each other in the proposed method, an indicator is constructed and included in the update law...
September 12, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30222585/event-based-line-fitting-and-segment-detection-using-a-neuromorphic-visual-sensor
#3
David Reverter Valeiras, Xavier Clady, Sio-Hoi Ieng, Ryad Benosman
This paper introduces an event-based, luminance-free algorithm for line and segment detection from the output of asynchronous event-based neuromorphic retinas. These recent biomimetic vision sensors are composed of autonomous pixels, each asynchronously generating visual events that encode relative changes in the pixel's illumination at high temporal resolution. This frame-free approach results in increased energy efficiency and real-time operation, making these sensors especially suitable for applications such as autonomous robotics...
September 12, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30207965/universal-approximation-capability-of-broad-learning-system-and-its-structural-variations
#4
C L Philip Chen, Zhulin Liu, Shuang Feng
Following the development of a very fast and efficient discriminative broad learning system (BLS) that takes advantage of a flattened structure and incremental learning, a mathematical proof of the universal approximation property of BLS is provided here. In addition, the framework of several BLS variants, with their mathematical modeling, is given. The variations include cascade, recurrent, and broad-deep combination structures. In the experimental results, the BLS and its variations outperform several existing learning algorithms in regression performance on function approximation, time-series prediction, and face recognition databases...
September 10, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30183647/multiview-multitask-gaze-estimation-with-deep-convolutional-neural-networks
#5
Dongze Lian, Lina Hu, Weixin Luo, Yanyu Xu, Lixin Duan, Jingyi Yu, Shenghua Gao
Gaze estimation, which aims to predict gaze points with given eye images, is an important task in computer vision because of its applications in human visual attention understanding. Many existing methods are based on a single camera, and most of them only focus on either the gaze point estimation or gaze direction estimation. In this paper, we propose a novel multitask method for the gaze point estimation using multiview cameras. Specifically, we analyze the close relationship between the gaze point estimation and gaze direction estimation, and we use a partially shared convolutional neural networks architecture to simultaneously estimate the gaze direction and gaze point...
September 3, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30183646/neural-adaptive-backstepping-control-of-a-robotic-manipulator-with-prescribed-performance-constraint
#6
Qing Guo, Yi Zhang, Branko G Celler, Steven W Su
This paper presents an adaptive neural network (NN) control of a two-degree-of-freedom manipulator driven by an electrohydraulic actuator. To restrict the system output to a prescribed performance constraint, a weighted performance function is designed to guarantee that the dynamic and steady-state tracking errors of the joint angle stay within a required accuracy. Then, a radial-basis-function NN is constructed to learn the unknown model dynamics of the manipulator by traditional backstepping control (TBC) and obtain a preliminary estimated model, which can replace the previously known dynamics in the backstepping iteration...
August 30, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30176610/developmental-resonance-network
#7
Gyeong-Moon Park, Jae-Woo Choi, Jong-Hwan Kim
Adaptive resonance theory (ART) networks deal with normalized input data only, which means that they require a normalization step for the raw input data, under the assumption that the upper and lower bounds of the input data are known in advance. Without such an assumption, ART networks cannot be used. To solve this problem and improve the learning performance, inspired by ART networks, we propose a developmental resonance network (DRN) employing new techniques: a global weight and node connection and grouping processes...
August 29, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30176609/%C3%A2-%C3%A2-norm-heteroscedastic-discriminant-analysis-under-mixture-of-gaussian-distributions
#8
Wenming Zheng, Cheng Lu, Zhouchen Lin, Tong Zhang, Zhen Cui, Wankou Yang
Fisher's criterion is one of the most popular discriminant criteria for feature extraction. It is defined as the generalized Rayleigh quotient of the between-class scatter distance to the within-class scatter distance. Consequently, Fisher's criterion does not take advantage of the discriminant information in the class covariance differences, and hence, its discriminant ability largely depends on the class mean differences. If the class mean distances are relatively large compared with the within-class scatter distance, Fisher's criterion-based discriminant analysis methods may achieve a good discriminant performance...
August 29, 2018: IEEE Transactions on Neural Networks and Learning Systems
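Fisher's criterion as described above, the generalized Rayleigh quotient of between-class to within-class scatter, can be computed directly. The following is a generic textbook sketch of that quotient, not the authors' heteroscedastic method:

```python
import numpy as np

def fisher_criterion(X, y, w):
    """Generalized Rayleigh quotient of between-class to within-class
    scatter along a direction w: J(w) = (w' S_b w) / (w' S_w w).
    Only class means enter S_b, which is why covariance differences
    between classes carry no weight in this criterion.
    """
    mu = X.mean(axis=0)
    dim = X.shape[1]
    S_b = np.zeros((dim, dim))
    S_w = np.zeros((dim, dim))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mu, mc - mu)  # spread of class means
        S_w += (Xc - mc).T @ (Xc - mc)               # scatter within classes
    return float(w @ S_b @ w) / float(w @ S_w @ w)

# Two classes separated along the first axis only: the criterion is much
# larger along that axis than along the uninformative second axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal([5.0, 0.0], 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
J1 = fisher_criterion(X, y, np.array([1.0, 0.0]))
J2 = fisher_criterion(X, y, np.array([0.0, 1.0]))
```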
https://www.readbyqxmd.com/read/30176607/learning-aggregated-transmission-propagation-networks-for-haze-removal-and-beyond
#9
Risheng Liu, Xin Fan, Minjun Hou, Zhiying Jiang, Zhongxuan Luo, Lei Zhang
Single-image dehazing is an important low-level vision task with many applications. Early studies investigated various visual priors to address this problem, but these may fail when their assumptions do not hold on specific images. Recent deep networks also achieve relatively good performance on this task. Unfortunately, because they disregard the rich physical rules of haze, a large amount of data is required for their training. More importantly, they may still fail when the testing images contain completely different haze distributions...
August 29, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30176608/domain-adaption-via-feature-selection-on-explicit-feature-map
#10
Wan-Yu Deng, Amaury Lendasse, Yew-Soon Ong, Ivor Wai-Hung Tsang, Lin Chen, Qing-Hua Zheng
In most domain adaption approaches, all features are used for domain adaption. However, often, not every feature is beneficial for domain adaption. In such cases, incorrectly involving all features might cause the performance to degrade. In other words, to make the model trained on the source domain work well on the target domain, it is desirable to find invariant features for domain adaption rather than using all features. However, invariant features across domains may lie in a higher order space, instead of in the original feature space...
August 28, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30137017/flexible-affinity-matrix-learning-for-unsupervised-and-semisupervised-classification
#11
Xiaozhao Fang, Na Han, Wai Keung Wong, Shaohua Teng, Jigang Wu, Shengli Xie, Xuelong Li
In this paper, we propose a unified model called flexible affinity matrix learning (FAML) for unsupervised and semisupervised classification by simultaneously exploiting both the relationships among data and the clustering structure. To capture the relationships among data, we exploit the self-expressiveness property of data to learn a structured matrix in which the structures are induced by different norms. A rank constraint is imposed on the Laplacian matrix of the desired affinity matrix, so that the number of connected components of the data exactly equals the number of clusters...
August 22, 2018: IEEE Transactions on Neural Networks and Learning Systems
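The rank constraint mentioned in the abstract rests on a standard fact from spectral graph theory: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components. A minimal illustration of that fact (generic, not the FAML algorithm itself):

```python
import numpy as np

def num_components(W, tol=1e-8):
    """Number of connected components of a graph with affinity matrix W,
    read off as the multiplicity of the zero eigenvalue of L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return int((np.linalg.eigvalsh(L) < tol).sum())

# Block-diagonal affinity: two disconnected pairs -> two components,
# so constraining rank(L) = n - k forces exactly k components.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
```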
https://www.readbyqxmd.com/read/30137016/perception-coordination-network-a-neuro-framework-for-multimodal-concept-acquisition-and-binding
#12
You-Lu Xing, Xiao-Feng Shi, Fu-Rao Shen, Jin-Xi Zhao, Jing-Xin Pan, Ah-Hwee Tan
To simulate the concept acquisition and binding of different senses in the brain, a biologically inspired neural network model named perception coordination network (PCN) is proposed. It is a hierarchical structure, which is functionally divided into the primary sensory area (PSA), the primary sensory association area (SAA), and the higher order association area (HAA). The PSA contains feature neurons which respond to many elementary features, e.g., colors, shapes, syllables, and basic flavors. The SAA contains primary concept neurons which combine the elementary features in the PSA to represent unimodal concept of objects, e...
August 21, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30137015/stepsize-range-and-optimal-value-for-taylor-zhang-discretization-formula-applied-to-zeroing-neurodynamics-illustrated-via-future-equality-constrained-quadratic-programming
#13
Yunong Zhang, Huihui Gong, Min Yang, Jian Li, Xuyun Yang
In this brief, future equality-constrained quadratic programming (FECQP) is studied. Via a zeroing neurodynamics method, a continuous-time zeroing neurodynamics (CTZN) model is presented. By using Taylor-Zhang discretization formula to discretize the CTZN model, a Taylor-Zhang discrete-time zeroing neurodynamics (TZ-DTZN) model is presented to perform FECQP. Furthermore, we focus on the critical parameter of the TZ-DTZN model, i.e., stepsize. By theoretical analyses, we obtain an effective range of the stepsize, which guarantees the stability of the TZ-DTZN model...
August 21, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30137014/adaptive-learning-control-for-nonlinear-systems-with-randomly-varying-iteration-lengths
#14
Dong Shen, Jian-Xin Xu
This paper proposes adaptive iterative learning control (ILC) schemes for continuous-time parametric nonlinear systems with iteration lengths that vary randomly. As opposed to the existing ILC works that feature nonuniform trial lengths, this paper is applicable to nonlinear systems that do not satisfy the globally Lipschitz continuous condition. In addition, this paper introduces a novel composite energy function based on newly defined virtual tracking error information for proving the asymptotic convergence...
August 21, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30137013/active-learning-from-imbalanced-data-a-solution-of-online-weighted-extreme-learning-machine
#15
Hualong Yu, Xibei Yang, Shang Zheng, Changyin Sun
It is well known that active learning can simultaneously improve the quality of the classification model and decrease the complexity of training instances. However, several previous studies have indicated that the performance of active learning is easily disrupted by an imbalanced data distribution. Some existing imbalanced active learning approaches also suffer from either low performance or high time consumption. To address these problems, this paper describes an efficient solution based on the extreme learning machine (ELM) classification model, called active online-weighted ELM (AOW-ELM)...
August 21, 2018: IEEE Transactions on Neural Networks and Learning Systems
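As background for the abstract above, a weighted ELM can be sketched generically: a random hidden layer followed by weighted ridge regression for the output weights, with minority-class samples up-weighted to counter imbalance. This is an illustrative sketch under those assumptions, not the authors' AOW-ELM:

```python
import numpy as np

def weighted_elm_train(X, y, n_hidden=30, C=1.0, seed=0):
    """Random hidden layer, then weighted ridge regression for the output
    weights; each sample is weighted inversely to its class frequency so
    the minority class is not drowned out."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                 # random feature map (fixed)
    counts = np.bincount(y)
    w = 1.0 / counts[y]                       # per-sample weights
    T = np.eye(counts.size)[y]                # one-hot targets
    WH = H * w[:, None]
    # Solve (H' diag(w) H + I/C) beta = H' diag(w) T  for beta.
    beta = np.linalg.solve(H.T @ WH + np.eye(n_hidden) / C, WH.T @ T)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.argmax(np.tanh(X @ W_in + b) @ beta, axis=1)

# Imbalanced toy data: 90 majority vs. 10 minority samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (90, 2)),
               rng.normal(4.0, 0.5, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
pred = elm_predict(X, *weighted_elm_train(X, y))
```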
https://www.readbyqxmd.com/read/30130240/markov-boundary-based-outlier-mining
#16
Kui Yu, Huanhuan Chen
It is a grand challenge to identify the outliers existing in subspaces from a high-dimensional data set. A brute-force method is computationally prohibitive since it requires examining an exponential number of subspaces. Current state-of-the-art methods explore various heuristics to significantly prune subspaces, facing the tradeoff between the subspace completeness and search efficiency. In this brief, we discuss a principal type of subspace outliers whose behaviors are different from the others on individual attributes...
August 20, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30130239/spectral-embedded-adaptive-neighbors-clustering
#17
Qi Wang, Zequn Qin, Feiping Nie, Xuelong Li
Spectral clustering has been widely used in many areas, especially in machine learning. Clustering with a similarity matrix and a low-dimensional representation of the data is the main reason for the promising performance of spectral clustering. However, a similarity matrix and low-dimensional representation derived directly from the input data may not always hold when the data are high dimensional and have a complex distribution. First, a similarity matrix based simply on a distance measurement might not be suitable for all kinds of data...
August 20, 2018: IEEE Transactions on Neural Networks and Learning Systems
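The fixed similarity-matrix pipeline that the abstract criticizes is the standard spectral-clustering recipe: Gaussian similarities, symmetric normalized Laplacian, and the bottom eigenvectors as the low-dimensional representation. A generic sketch of that baseline (not the proposed adaptive-neighbors method):

```python
import numpy as np

def spectral_embedding(X, sigma=1.0, k=2):
    """Gaussian similarity matrix, symmetric normalized Laplacian, and
    the k bottom eigenvectors as the representation, with the usual
    row normalization before clustering. The fixed Gaussian kernel is
    exactly the part that may not suit every data distribution."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L_sym = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(L_sym)       # eigenvalues in ascending order
    emb = vecs[:, :k]
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

# Two well-separated 1-D clusters: embedding rows coincide within each
# cluster and point in sharply different directions across clusters.
X = np.array([[0.0], [0.1], [0.2], [0.3], [0.4],
              [10.0], [10.1], [10.2], [10.3], [10.4]])
emb = spectral_embedding(X)
```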
https://www.readbyqxmd.com/read/30130238/online-robust-low-rank-tensor-modeling-for-streaming-data-analysis
#18
Ping Li, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, Shuicheng Yan
Tensor data (i.e., data with multiple dimensions) are quickly growing in scale in many practical applications, which poses new challenges for data modeling and analysis approaches, such as high-order relations of large complexity, gross noise, and varying data scale. Existing low-rank data analysis methods, which are effective at analyzing matrix data, may fail in the regime of tensor data due to these challenges. A robust and scalable low-rank tensor modeling method is thus highly desirable. In this paper, we develop an online robust low-rank tensor modeling (ORLTM) method to address these challenges...
August 20, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30130237/adaptive-neural-state-feedback-tracking-control-of-stochastic-nonlinear-switched-systems-an-average-dwell-time-method
#19
Ben Niu, Ding Wang, Naif D Alotaibi, Fuad E Alsaadi
In this paper, the problem of adaptive neural state-feedback tracking control is considered for a class of stochastic nonstrict-feedback nonlinear switched systems with completely unknown nonlinearities. In the design procedure, the universal approximation capability of radial basis function neural networks is used to identify the unknown compounded nonlinear functions, and a variable separation technique is employed to overcome the design difficulty caused by the nonstrict-feedback structure. The most outstanding novelty of this paper is that an individual Lyapunov function for each subsystem is constructed by flexibly adopting the upper and lower bounds of the control gain functions of each subsystem...
August 20, 2018: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/30130236/denoising-adversarial-autoencoders
#20
Antonia Creswell, Anil Anthony Bharath
Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space...
August 16, 2018: IEEE Transactions on Neural Networks and Learning Systems
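The denoising idea in the abstract, learning to recover clean samples from corrupted inputs, can be illustrated with a minimal linear autoencoder trained by plain gradient descent on synthetic data. This is a toy sketch of the denoising objective only, not the adversarial autoencoder of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))           # toy "clean" data
W1 = 0.01 * rng.standard_normal((20, 8))     # encoder, 20 -> 8 bottleneck
W2 = 0.01 * rng.standard_normal((8, 20))     # decoder, 8 -> 20

def clean_loss(X, W1, W2):
    """Reconstruction error on uncorrupted inputs."""
    return float((((X @ W1) @ W2 - X) ** 2).mean())

loss_before = clean_loss(X, W1, W2)
lr = 0.2
for _ in range(800):
    Xn = X + 0.1 * rng.standard_normal(X.shape)  # corrupt the input
    h = Xn @ W1
    err = h @ W2 - X                             # target is the CLEAN data
    dOut = 2.0 * err / err.size                  # gradient of mean sq. error
    dW1 = Xn.T @ (dOut @ W2.T)
    dW2 = h.T @ dOut
    W1 -= lr * dW1
    W2 -= lr * dW2
loss_after = clean_loss(X, W1, W2)
```

Mapping corrupted inputs back to clean targets is what forces the representation to be robust; the paper's contribution is combining this with adversarial shaping of the latent distribution.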