
IEEE Transactions on Neural Networks and Learning Systems

Zhenyuan Guo, Linlin Liu, Jun Wang
This paper is concerned with the multistability of switched neural networks with piecewise linear activation functions under state-dependent switching. Under some reasonable assumptions on the switching threshold and activation functions, using the state-space decomposition method, the contraction mapping theorem, and strictly diagonally dominant matrix theory, we characterize the number of equilibria and analyze the stability or instability of each equilibrium. More interestingly, we find that the switching threshold plays an important role in producing stable equilibria in the unsaturation regions of the activation functions, and that the number of stable equilibria of an n-neuron switched neural network with state-dependent parameters increases from 2ⁿ in the conventional network to 3ⁿ...
November 12, 2018: IEEE Transactions on Neural Networks and Learning Systems
Shi-Lu Dai, Shude He, Min Wang, Chengzhi Yuan
This paper presents adaptive neural tracking control of underactuated surface vessels with modeling uncertainties and time-varying external disturbances, where the tracking errors, consisting of position and orientation errors, are required to remain inside predefined feasible regions in which the controller singularity problem does not occur. To provide preselected specifications on the transient and steady-state performance of the tracking errors, the boundary functions of the predefined regions are taken as exponentially decaying functions of time...
November 12, 2018: IEEE Transactions on Neural Networks and Learning Systems
Xuelong Li, Quanmao Lu, Yongsheng Dong, Dacheng Tao
Subspace clustering is the problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches follow the spectral clustering-based model: they focus on learning a representation matrix with which to construct a suitable similarity matrix, but overlook the influence of the noise term on subspace clustering. However, real data are usually contaminated by noise, and that noise often has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on the Cauchy loss function (CLF)...
November 12, 2018: IEEE Transactions on Neural Networks and Learning Systems
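The abstract above does not give the exact form of the paper's CLF; a common Cauchy (Lorentzian) loss, scaled so it matches the squared loss for small residuals, is a reasonable sketch of the idea (the scale parameter c here is an assumed convention, not taken from the paper):

```python
import math

def cauchy_loss(residual, c=1.0):
    """Cauchy (Lorentzian) loss: (c^2/2) * log(1 + (r/c)^2).
    It grows only logarithmically, so samples corrupted by
    heavy-tailed noise are down-weighted relative to the
    squared loss."""
    return (c ** 2 / 2.0) * math.log(1.0 + (residual / c) ** 2)

def squared_loss(residual):
    return 0.5 * residual ** 2

small = (cauchy_loss(0.1), squared_loss(0.1))   # nearly equal
large = (cauchy_loss(10.0), squared_loss(10.0)) # Cauchy far smaller
```

For a residual of 0.1 the two losses nearly coincide, while for a residual of 10 the Cauchy loss is over an order of magnitude smaller, which is why it is robust to outliers with complicated noise distributions.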
Shuang Wu, Guoqi Li, Lei Deng, Liu Liu, Dong Wu, Yuan Xie, Luping Shi
Batch normalization (BN) has recently become a standard component for accelerating and improving the training of deep neural networks (DNNs). However, BN brings in additional calculations, consumes more memory, and significantly slows down the training iteration. Furthermore, the nonlinear square and square-root operations in the normalization process impede low bit-width quantization techniques, which have drawn much attention in the deep learning hardware community. In this paper, we propose an L1-norm BN (L1BN) with only linear operations in both the forward and backward propagations during training...
November 9, 2018: IEEE Transactions on Neural Networks and Learning Systems
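The core idea of L1BN can be sketched by replacing the standard deviation with the mean absolute deviation, removing the square and square-root operations from the forward pass. This is a minimal illustration, not the paper's full method (L1BN additionally applies a scaling constant so the L1 statistic matches the L2 variance, which this sketch omits):

```python
import numpy as np

def l1_batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize activations with the mean absolute deviation
    (an L1 dispersion measure) instead of the standard deviation,
    so the forward pass uses only linear operations."""
    mu = x.mean(axis=0)
    mad = np.abs(x - mu).mean(axis=0)   # L1 statistic, no square/sqrt
    x_hat = (x - mu) / (mad + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 16)) * 3.0 + 2.0
y = l1_batch_norm(x)
# After normalization, each feature has mean ~0 and
# mean absolute deviation ~1.
```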
Seaar Al-Dabooni, Donald Wunsch
This paper provides a stability analysis for a model-free action-dependent heuristic dynamic programming (HDP) approach with an eligibility-trace long-term prediction parameter (λ). HDP(λ) learns from more than one future reward. Eligibility traces have long been popular in Q-learning; this paper proves and demonstrates that they are also worthwhile with HDP. We prove the uniformly ultimately bounded (UUB) property under certain conditions. Previous work presents a UUB proof for traditional HDP [HDP(λ=0)], but we extend the proof to the λ parameter...
November 9, 2018: IEEE Transactions on Neural Networks and Learning Systems
Ruihan Hu, Sheng Chang, Hao Wang, Jin He, Qijun Huang
Error functions in supervised learning algorithms for spiking neural networks (SNNs) are normally based on the distance between output spikes and target spikes. Due to the discontinuous nature of a spiking neuron's internal state, it is challenging to ensure that the numbers of output spikes and target spikes remain identical in multispike learning. This problem is conventionally handled by using the smaller of the number of desired spikes and the number of actual output spikes in learning. However, this approach loses information, as some spikes are neglected...
November 6, 2018: IEEE Transactions on Neural Networks and Learning Systems
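The information loss the authors point out can be illustrated with a minimal sketch of the conventional truncation strategy. The greedy index pairing of sorted spike times used here is an assumption for illustration; actual multispike learning rules pair spikes in more sophisticated ways:

```python
def paired_spikes_conventional(desired, actual):
    """Conventional multispike pairing: truncate both spike trains
    to the smaller spike count, silently discarding the surplus
    spikes (and the information they carry)."""
    n = min(len(desired), len(actual))
    return list(zip(sorted(desired)[:n], sorted(actual)[:n]))

desired = [5.0, 12.0, 20.0]   # target spike times (ms)
actual = [6.0, 19.5]          # one output spike is missing
pairs = paired_spikes_conventional(desired, actual)
# Only two pairs are formed; the desired spike at t=20.0 is ignored.
error = sum(abs(d - a) for d, a in pairs)
```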
Qian Zhang, Jie Lu, Dianshuang Wu, Guangquan Zhang
The aim of recommender systems is to automatically identify user preferences within collected data and then use those preferences to make recommendations that support decisions. However, recommender systems suffer from the data sparsity problem, which is particularly prevalent in newly launched systems that have not yet had time to amass sufficient data. As a solution, cross-domain recommender systems transfer knowledge from a source domain with relatively rich data to assist recommendation in the target domain...
November 6, 2018: IEEE Transactions on Neural Networks and Learning Systems
Weiping Ding, Chin-Teng Lin, Zehong Cao
The unprecedented increase in data volume has become a severe challenge for conventional data mining and learning systems tasked with handling big data. The recently introduced Spark platform is a new processing framework for big data analysis and related learning systems, and it has attracted increasing attention from both the scientific community and industry. In this paper, we propose a shared nearest-neighbor quantum game-based attribute reduction (SNNQGAR) algorithm that incorporates the hierarchical coevolutionary Spark model...
November 6, 2018: IEEE Transactions on Neural Networks and Learning Systems
Xiaolin Xiao, Yicong Zhou
Benefiting from a quaternion representation that can encode the cross-channel correlation of color images, quaternion principal component analysis (QPCA) was proposed to extract features from color images while reducing the feature dimension. A quaternion covariance matrix (QCM) of the input samples is constructed, and its eigenvectors are derived to obtain the QPCA solution. However, eigendecomposition yields a fixed solution for the same input; this solution is susceptible to outliers and cannot be further optimized...
November 6, 2018: IEEE Transactions on Neural Networks and Learning Systems
Yongduan Song, Liu He, Dong Zhang, Jiye Qian, Jin Fu
This paper investigates the position and attitude tracking control problem of a quadrotor unmanned aerial vehicle subject to modeling uncertainties and actuator failures. A comprehensive mathematical model reflecting the nonlinearity and state-space coupling of the dynamics, as well as actuation faults and external disturbances, is derived. By combining radial basis function neural networks (NNs) with virtual parameter estimation algorithms, an indirect NN-based adaptive fault-tolerant control scheme is developed, which exhibits several attractive features compared with most existing methods: 1) it is not only robust and adaptive to nonparametric uncertainties but also tolerant to unexpected actuation faults; 2) it ensures stable tracking without the need for precise information on the system model; and 3) it involves only one lumped parameter adaptation, and is thus structurally simpler and computationally less expensive, making the resultant scheme less demanding to program and more affordable for onboard implementation...
November 5, 2018: IEEE Transactions on Neural Networks and Learning Systems
Deyuan Meng
Learning from saved measurement and control data to refine output-tracking performance is the core feature of iterative learning control (ILC). Although this process requires no model knowledge, ILC typically requires strict repetitiveness of the control systems, especially of their plant models. The questions of interest in this paper are: 1) whether and how robust ILC problems can be solved with respect to nonrepetitive (or iteration-dependent) model uncertainties and 2) whether convergence conditions can be developed with the effective contraction-mapping (CM)-based approach to ILC. The answers to both questions are affirmative, and the CM-based approach is applicable to robust ILC that accommodates certain nonrepetitive uncertainties, especially in the plant models...
November 5, 2018: IEEE Transactions on Neural Networks and Learning Systems
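The CM-based ILC idea in the abstract above can be sketched on a toy, perfectly repetitive plant. This is a minimal P-type ILC illustration under assumed scalar gains, not the paper's setting (which concerns iteration-dependent model uncertainties):

```python
import numpy as np

def ilc_run(plant_gain, learning_gain, reference, trials=30):
    """P-type iterative learning control: after each trial, the
    stored tracking error corrects the input,
        u_{k+1} = u_k + L * e_k.
    Convergence follows from the contraction-mapping condition
    |1 - L*G| < 1 on the error recursion e_{k+1} = (1 - L*G) e_k."""
    u = np.zeros_like(reference)
    e = reference.copy()
    for _ in range(trials):
        y = plant_gain * u            # simplified static, repetitive plant
        e = reference - y             # saved tracking error
        u = u + learning_gain * e     # learning update from saved data
    return u, e

ref = np.array([1.0, 2.0, 3.0])
u, e = ilc_run(plant_gain=2.0, learning_gain=0.4, reference=ref)
# |1 - 0.4*2.0| = 0.2 < 1, so the error contracts toward zero
# and the learned input converges to ref / plant_gain.
```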
Peng Tang, Xinggang Wang, Baoguang Shi, Xiang Bai, Wenyu Liu, Zhuowen Tu
Despite the great success of convolutional neural networks (CNNs) on image classification data sets such as Cifar and ImageNet, a CNN's representational power is still somewhat limited in dealing with images that have large variation in size and clutter, where the Fisher vector (FV) has been shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian mixture model (GMM). FV, however, has limited learning capability, and its parameters are mostly fixed after the codebook is constructed...
November 5, 2018: IEEE Transactions on Neural Networks and Learning Systems
Kai Li, Zhengming Ding, Sheng Li, Yun Fu
Person reidentification (ReID) has recently been widely investigated for its vital role in surveillance and forensics applications. This paper addresses the low-resolution (LR) person ReID problem, which is of great practical significance because pedestrians are often captured at low resolutions by surveillance cameras. Existing methods cope with this problem via complicated and time-consuming strategies, making them less favorable in practice; meanwhile, their performance is far from satisfactory. Instead, we solve this problem by developing a discriminative semicoupled projective dictionary learning (DSPDL) model, which adopts the efficient projective dictionary learning strategy and jointly learns a pair of dictionaries and a mapping function to model the correspondence of the cross-view data...
November 2, 2018: IEEE Transactions on Neural Networks and Learning Systems
Xingxing Zhang, Zhenfeng Zhu, Yao Zhao, Dongxia Chang, Ji Liu
Prototype selection aims to remove redundancy and irrelevance from large-scale data by selecting an informative subset, making it possible to view all the data through a few prototypes. However, due to outliers and the uncertain distribution of the data, the selected prototypes are often insufficiently representative and diverse. To alleviate this issue, we develop, in this paper, an ℓ1-norm-induced discriminative prototype selection model (ℓ1-ProSe). Inspired by the good performance of sparse representation, the sparsity property of the data is rationally exploited in the formulated model...
November 2, 2018: IEEE Transactions on Neural Networks and Learning Systems
Antonia Creswell, Anil Anthony Bharath
Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an ``inverse model,'' a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample...
November 2, 2018: IEEE Transactions on Neural Networks and Learning Systems
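The inversion problem described above (mapping a data sample back to latent space) is often attacked by gradient descent on a reconstruction loss. As a hedged sketch, the "generator" below is a stand-in linear map G(z) = A @ z chosen so the gradient has a closed form; a trained GAN generator would require automatic differentiation instead, and this is not the specific method of the paper:

```python
import numpy as np

def invert_generator(A, x, steps=200, lr=0.1, seed=0):
    """Infer a latent code z with G(z) ~= x by gradient descent on
    the squared reconstruction error ||A z - x||^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(A.shape[1])   # start from the prior
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ z - x)    # d/dz ||A z - x||^2
        z -= lr * grad
    return z

A = np.array([[1.0, 0.2],
              [0.0, 0.8],
              [0.5, -0.3]])   # toy linear "generator" (assumption)
z_true = np.array([0.5, -1.0])
x = A @ z_true                 # a "data sample" with known latent
z_hat = invert_generator(A, x)
# Since A has full column rank, z_hat converges to z_true.
```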
Matthias Freiberger, Andrew Katumba, Peter Bienstman, Joni Dambre
As Moore's law comes to an end, neuromorphic approaches to computing are on the rise. One of these, passive photonic reservoir computing, is a strong candidate for computing at high bitrates (>10 Gb/s) with low energy consumption. Currently, though, both benefits are limited by the necessity of performing training and readout operations in the electrical domain. Thus, efforts are underway in the photonic community to design an integrated optical readout, which allows all operations to be performed in the optical domain...
October 31, 2018: IEEE Transactions on Neural Networks and Learning Systems
Xiaofeng Cao, Baozhi Qiu, Xiangli Li, Zenglin Shi, Guandong Xu, Jianliang Xu
The balance of the neighborhood space around a central point is an important concept in cluster analysis and can be used to effectively detect cluster boundary objects. Existing neighborhood analysis methods focus on the distribution of the data, i.e., they analyze the characteristics of the neighborhood space from a single perspective and cannot capture rich data characteristics. In this paper, we analyze the high-dimensional neighborhood space from multiple perspectives. By modeling each dimension of a data point's k-nearest-neighbor space (kNNs) as a lever, we apply the lever principle to compute the balance fulcrum of each dimension after proving its existence and uniqueness...
October 31, 2018: IEEE Transactions on Neural Networks and Learning Systems
Seaar Al-Dabooni, Donald Wunsch
This paper presents an improved method for reducing high-order dynamical system models via clustering. Agglomerative hierarchical clustering based on performance evaluation (HC-PE) is introduced for model order reduction. This method computes the reduced order denominator of the transfer function model by clustering system poles in a hierarchical dendrogram. The base layer represents an nth order system, which is used to calculate each successive layer to reduce the model order until finally reaching a second-order system...
October 31, 2018: IEEE Transactions on Neural Networks and Learning Systems
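The agglomerative pole-clustering step of HC-PE can be sketched as follows. This is a toy illustration under assumed conventions (merging the closest pair into its mean); the actual HC-PE method additionally scores merges by a performance evaluation of the reduced transfer function, which this sketch omits:

```python
def reduce_poles(poles, target_order=2):
    """Toy agglomerative reduction: repeatedly merge the two closest
    pole clusters (replacing them with their mean) until only
    `target_order` representative poles remain, forming the
    denominator roots of the reduced-order model."""
    clusters = [complex(p) for p in poles]
    while len(clusters) > target_order:
        # find the closest pair of cluster representatives
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: abs(clusters[ab[0]] - clusters[ab[1]]))
        merged = (clusters[i] + clusters[j]) / 2.0
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

# A 5th-order system's poles reduced to a 2nd-order representation:
poles = [-1.0, -1.2, -10.0, -11.0, -50.0]
reduced = reduce_poles(poles, target_order=2)
```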
Yan Ru Pei, Fabio L Traversa, Massimiliano Di Ventra
Universal memcomputing machines (UMMs) represent a novel computational model in which memory (time nonlocality) accomplishes both tasks of storing and processing of information. UMMs have been shown to be Turing-complete, namely, they can simulate any Turing machine. In this paper, we first introduce a novel set theory approach to compare different computational models and use it to recover the previous results on Turing-completeness of UMMs. We then relate UMMs directly to liquid-state machines (or ``reservoir-computing'') and quantum machines (``quantum computing'')...
October 31, 2018: IEEE Transactions on Neural Networks and Learning Systems
Qiang Xiao, Tingwen Huang, Zhigang Zeng
This paper considers a generalized discrete-time inertial neural network (GDINN). Using timescale theory, the original network is rewritten as a timescale-type inertial NN. Two scenarios are considered. In the first, several criteria guaranteeing global exponential stability of the addressed GDINN are obtained based on the generalized matrix measure concept; in this case, no Lyapunov function or functional is necessary. In the second, some analytical inequality and scaling techniques are used to establish global exponential stability for the considered GDINN...
October 30, 2018: IEEE Transactions on Neural Networks and Learning Systems