IEEE Transactions on Neural Networks and Learning Systems

Xichuan Zhou, Shengli Li, Fang Tang, Shengdong Hu, Zhi Lin, Lei Zhang
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections...
July 18, 2017: IEEE Transactions on Neural Networks and Learning Systems
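The abstract above does not spell out how the deep adaptive network sparsifies neural connections; as an illustration of the general idea such algorithm-hardware codesigns exploit, here is a minimal magnitude-pruning sketch. The `keep_ratio` thresholding scheme is a generic stand-in, not the paper's algorithm:

```python
import numpy as np

def prune_connections(W, keep_ratio=0.1):
    # Generic magnitude pruning: zero all but the largest-magnitude weights,
    # illustrating the kind of connection sparsity a hardware codesign can exploit.
    flat = np.abs(W).ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]     # k-th largest magnitude
    return np.where(np.abs(W) >= thresh, W, 0.0)
```

A 90% pruned weight matrix needs only a tenth of the storage and multiply-accumulate operations, which is the memory/bandwidth saving the abstract alludes to.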
Furui Liu, Lai-Wan Chan
In this paper, we deal with the problem of inferring causal relations for multidimensional data. Based on the postulate that the distribution of the cause and the conditional distribution of the effect given the cause are generated independently, we show that the covariance matrix of the mean embedding of the cause in a reproducing kernel Hilbert space (RKHS) is freely independent of the covariance matrix of the conditional embedding of the effect given the cause. This condition, termed the freeness condition, induces a cause-effect asymmetry: a designed measurement equals $0$ in the causal direction but is smaller than $0$ in the anticausal direction, which uncovers the causal direction...
July 18, 2017: IEEE Transactions on Neural Networks and Learning Systems
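The freeness measurement itself is not given in this excerpt, but a basic building block, the spectrum of the empirical covariance operator of an RKHS mean embedding, can be sketched via the doubly centered Gram matrix. The RBF kernel and `gamma` value are illustrative choices, not the paper's:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    # RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def embedding_cov_spectrum(X, gamma=1.0):
    # The empirical covariance operator of the RKHS embedding shares its
    # nonzero spectrum with the doubly centered Gram matrix divided by n.
    n = X.shape[0]
    K = rbf_gram(X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ K @ H
    return np.linalg.eigvalsh(Kc / n)
```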
Yang Liu, Jinde Cao, Bowen Li, Jianquan Lu
In this brief, we first study the normalization of dynamic-algebraic Boolean networks (DABNs). A new expression for the normalized DABNs is obtained. As applications of this result, the solvability and uniqueness of the solution to DABNs are then investigated. Necessary and sufficient conditions for the solvability and the uniqueness are obtained. In addition, pinning control to ensure the solvability and uniqueness of the solution to DABNs is also studied. Numerical examples are given to illustrate the efficiency of the proposed results...
July 14, 2017: IEEE Transactions on Neural Networks and Learning Systems
Yujuan Wang, Yongduan Song, Wei Ren
This paper presents a distributed adaptive finite-time control solution to the formation-containment problem for multiple networked systems with uncertain nonlinear dynamics and directed communication constraints. By exploiting the special topology feature of the newly constructed symmetric matrix, the technical difficulty in finite-time formation-containment control arising from the asymmetric Laplacian matrix under one-way directed communication is circumvented. Based upon fractional power feedback of the local error, an adaptive distributed control scheme is established to drive the leaders into the prespecified formation configuration in finite time...
July 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
Lei Liu, Jinde Cao, Cheng Qian
In this paper, the pth moment input-to-state exponential stability for delayed recurrent neural networks (DRNNs) with Markovian switching is studied. By using stochastic analysis techniques and classical Razumikhin techniques, a generalized vector L-operator differential inequality including cross terms is obtained. Without additional restrictive conditions on the time-varying delay, sufficient criteria for the pth moment input-to-state exponential stability of DRNNs with Markovian switching are derived by means of the vector L-operator differential inequality...
July 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
Sumit Bam Shrestha, Qing Song
Stability is a key issue during spiking neural network training using SpikeProp. The inherent nonlinearity of the spiking neuron means that the learning manifold changes abruptly; therefore, the learning step must be chosen carefully at every instance. Other sources of instability are the external disturbances that come along with the training samples, as well as the internal disturbances that arise from modeling imperfection. The unstable learning scenario can be indirectly observed in the form of surges, which are sudden increases in the learning cost and are a common occurrence during SpikeProp training...
July 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
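The paper's actual stability criterion is not included in this excerpt; the following is only a generic surge-guard heuristic in the spirit of the abstract, shrinking the learning rate whenever the cost jumps sharply. The ratio and shrink/grow factors are illustrative assumptions:

```python
def surge_guarded_steps(costs, lr0=0.1, surge_ratio=1.5, shrink=0.5, grow=1.05):
    # Given a sequence of observed training costs, adapt the learning rate:
    # shrink it when a surge (sudden cost increase) is detected,
    # grow it gently (capped at lr0) otherwise.
    lr = lr0
    lrs = []
    prev = costs[0]
    for c in costs:
        if c > surge_ratio * prev:      # surge detected
            lr *= shrink
        else:
            lr = min(lr * grow, lr0)
        lrs.append(lr)
        prev = c
    return lrs
```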
Dong Wang, Xiaoyang Tan
Learning a distance metric in feature space potentially improves the performance of the K nearest neighbor classifier and is useful in many real-world applications. Many metric learning (ML) algorithms are, however, based on the point estimation of a quadratic optimization problem, which is time-consuming, susceptible to overfitting, and lacks a natural mechanism to reason about parameter uncertainty, a property that is especially useful when the training set is small and/or noisy. To deal with these issues, we present a novel Bayesian ML (BML) method, called Bayesian neighborhood component analysis (NCA), based on the well-known NCA method, in which the metric posterior is characterized by the local label consistency constraints of observations, encoded with a similarity graph instead of independent pairwise constraints...
July 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
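For context, the point-estimate NCA objective that Bayesian NCA places a posterior over can be written down directly. This sketch computes the standard NCA score, the expected number of correctly classified points under stochastic neighbour selection in the projected space, not the Bayesian extension itself:

```python
import numpy as np

def nca_objective(A, X, y):
    # Standard NCA: each point i picks a neighbour j with probability
    # p_ij proportional to exp(-||A x_i - A x_j||^2); the objective is the
    # expected number of points whose picked neighbour shares their label.
    Z = X @ A.T
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # a point never selects itself
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)
    same = (y[:, None] == y[None, :])
    return np.sum(P[same])
```

Bayesian NCA, per the abstract, replaces the single optimized `A` with a posterior constrained by label consistency on a similarity graph.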
Zheng Zhang, Yong Xu, Ling Shao, Jian Yang
Existing block-diagonal representation studies mainly focus on imposing block-diagonal regularization on the training data, while little attention is dedicated to concurrently learning block-diagonal representations of both training and test data. In this paper, we propose a discriminative block-diagonal low-rank representation (BDLRR) method for recognition. In particular, the elaborate BDLRR is formulated as a joint optimization problem of shrinking the unfavorable representation from off-block-diagonal elements and strengthening the compact block-diagonal representation under the semisupervised framework of LRR...
July 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xu Yang, Hong Qiao, Zhi-Yong Liu
We propose a weighted common subgraph (WCS) matching algorithm to find the most similar subgraphs in two labeled weighted graphs. WCS matching, as a natural generalization of equal-sized graph matching and subgraph matching, has found wide applications in many computer vision and machine learning tasks. In this brief, WCS matching is first formulated as a combinatorial optimization problem over the set of partial permutation matrices. Then, it is approximately solved by a recently proposed combinatorial optimization framework, the graduated nonconvexity and concavity procedure...
July 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
Ying Lu, Liming Chen, Alexandre Saidi, Emmanuel Dellandrea, Yunhong Wang
Transfer learning (TL) aims at solving the problem of learning an effective classification model for a target category, which has few training samples, by leveraging knowledge from source categories with far more training data. We propose a new discriminative TL (DTL) method, combining a series of hypotheses made by both the model learned with target training samples and the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary...
July 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
Wei Luo, Jun Li, Jian Yang, Wei Xu, Jian Zhang
Convolutional sparse coding (CSC) can model local connections between image content and reduce code redundancy compared with patch-based sparse coding. However, CSC needs a complicated optimization procedure to infer the codes (i.e., feature maps). In this brief, we propose a convolutional sparse auto-encoder (CSAE), which leverages the structure of the convolutional AE and incorporates max-pooling to heuristically sparsify the feature maps for feature learning. Together with competition over feature channels, this simple sparsifying strategy makes the stochastic gradient descent algorithm work efficiently for CSAE training; thus, no complicated optimization procedure is involved...
June 29, 2017: IEEE Transactions on Neural Networks and Learning Systems
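The max-pooling-based sparsification described above can be sketched as a winner-take-all operation per pooling window. This minimal version (non-overlapping windows, a single feature map) is an assumption about the details, not the paper's exact procedure:

```python
import numpy as np

def maxpool_sparsify(fmap, pool=2):
    # Zero out everything except the maximum in each pool x pool window:
    # the feature map keeps at most one active unit per window.
    H, W = fmap.shape
    out = np.zeros_like(fmap)
    for i in range(0, H, pool):
        for j in range(0, W, pool):
            win = fmap[i:i + pool, j:j + pool]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            out[i + r, j + c] = win[r, c]
    return out
```

Because only the window maxima survive, the activation passed to the decoder is sparse by construction, with no extra penalty term in the loss.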
Yongming Li, Shaocheng Tong
In this paper, an adaptive neural network (NN)-based decentralized control scheme with prescribed performance is proposed for uncertain switched nonstrict-feedback interconnected nonlinear systems. It is assumed that the nonlinear interconnected terms and nonlinear functions of the concerned systems are unknown, and that the switching signals are unknown and arbitrary. A linear state estimator is constructed to solve the problem of unmeasured states. The NNs are employed to approximate the unknown interconnected terms and nonlinear functions...
June 28, 2017: IEEE Transactions on Neural Networks and Learning Systems
Chengwei Wu, Jianxing Liu, Yongyang Xiong, Ligang Wu
This paper studies an output-based adaptive fault-tolerant control problem for nonlinear systems in nonstrict-feedback form. Neural networks are utilized to identify the unknown nonlinear characteristics in the system. An observer and a general fault model are constructed to estimate the unavailable states and describe the fault, respectively. Adaptive parameters are constructed to overcome the difficulties in the design process for nonstrict-feedback systems. Meanwhile, the dynamic surface control technique is introduced to avoid the problem of "explosion of complexity"...
June 28, 2017: IEEE Transactions on Neural Networks and Learning Systems
Zhi Xiao, Zhe Luo, Bo Zhong, Xin Dang
Well known for its simplicity and effectiveness in classification, AdaBoost nevertheless suffers from overfitting when class-conditional distributions have significant overlap. Moreover, it is very sensitive to label noise. This paper tackles both limitations simultaneously by optimizing a modified loss function (i.e., the conditional risk). The proposed approach has two advantages. First, it is able to directly take label uncertainty into account with an associated label confidence...
June 28, 2017: IEEE Transactions on Neural Networks and Learning Systems
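The modified loss (conditional risk) is not reproduced in this excerpt; as a sketch of the second ingredient, label confidence, here is a hypothetical confidence-weighted variant of the standard AdaBoost weight update. The `conf` vector and the way it scales the exponent are assumptions, not the paper's formulation:

```python
import numpy as np

def confidence_weight_update(w, y, h, alpha, conf):
    # AdaBoost-style reweighting where each label's contribution to the
    # exponential update is scaled by its confidence: a low-confidence
    # (possibly noisy) label is upweighted less when misclassified.
    w = w * np.exp(-alpha * conf * y * h)
    return w / w.sum()
```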
Feihu Huang, Songcan Chen, Sheng-Jun Huang
In this paper, we propose a joint conditional graphical Lasso to learn multiple conditional Gaussian graphical models, also known as Gaussian conditional random fields, with some similar structures. Our model builds on the maximum likelihood method with the convex sparse group Lasso penalty. Moreover, our model is able to model multiple multivariate linear regressions with unknown noise covariances via a convex formulation. In addition, we develop an efficient approximate Newton method for optimizing our model...
June 28, 2017: IEEE Transactions on Neural Networks and Learning Systems
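The convex sparse group Lasso penalty mentioned above couples the precision matrices of the K tasks entrywise. A minimal sketch of evaluating it, where the shapes and the lam1/lam2 split follow the usual sparse group Lasso form (an assumption about this paper's exact formulation):

```python
import numpy as np

def sparse_group_lasso_penalty(Thetas, lam1, lam2):
    # Thetas: array of shape (K, p, p), one precision matrix per task.
    # lam1 * elementwise l1 encourages sparsity within each matrix;
    # lam2 * groupwise l2 (across tasks, per entry) encourages the K
    # matrices to share a common sparsity pattern.
    Thetas = np.asarray(Thetas)
    l1 = np.abs(Thetas).sum()
    group = np.sqrt((Thetas ** 2).sum(axis=0)).sum()
    return lam1 * l1 + lam2 * group
```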
Chao Du, Jun Zhu, Bo Zhang
Deep generative models (DGMs), which are often organized in a hierarchical manner, provide a principled framework for capturing the underlying causal factors of data. Recent work on DGMs has focused on the development of efficient and scalable variational inference methods that learn a single model under mean-field or parameterization assumptions. However, little work has been done on extending Markov chain Monte Carlo (MCMC) methods to Bayesian DGMs, which enjoy many advantages over variational methods...
June 28, 2017: IEEE Transactions on Neural Networks and Learning Systems
Tingting Yu, Jianxing Liu, Yi Zeng, Xian Zhang, Qingshuang Zeng, Ligang Wu
This paper is concerned with the exponential stability analysis of genetic regulatory networks (GRNs) with switching parameters and time delays. A new integral inequality and an improved reciprocally convex combination inequality are employed. Using the average dwell time approach together with a novel Lyapunov-Krasovskii functional, we derive conditions ensuring that the switched GRNs with switching parameters and time delays are exponentially stable. Finally, two numerical examples are given to demonstrate the effectiveness of the derived results...
June 28, 2017: IEEE Transactions on Neural Networks and Learning Systems
Ali Heydari
This paper is focused on bandwidth allocation in nonlinear networked control systems. The objective is optimal triggering/scheduling for transmitting sensor measurements to the controller through a communication network. An algorithm based on approximate dynamic programming is developed for problems with fixed final times and then the result is extended to problems with infinite horizon. Zero-order-hold (ZOH), generalized ZOH, and networks with packet dropouts are the investigated cases. Problems with unknown models are also addressed and a model-free scheme is established for learning the (approximate) optimal solution...
June 27, 2017: IEEE Transactions on Neural Networks and Learning Systems
Peng Liu, Zhigang Zeng, Jun Wang
This paper is concerned with the coexistence of multiple equilibrium points and dynamical behaviors of recurrent neural networks with nonmonotonic activation functions and unbounded time-varying delays. Based on a state space partition by using the geometrical properties of the activation functions, it is revealed that an n-neuron neural network can exhibit $\prod_{i=1}^{n}(2K_i+1)$ equilibrium points with $K_i \ge 0$. In particular, several sufficient criteria are proposed to ascertain the asymptotical stability of $\prod_{i=1}^{n}(K_i+1)$ equilibrium points for recurrent neural networks...
June 27, 2017: IEEE Transactions on Neural Networks and Learning Systems
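The two counts in the abstract are simple products over the per-neuron indices $K_i$; for example, a 2-neuron network with $K_1 = K_2 = 1$ can exhibit $3 \times 3 = 9$ equilibria, of which $2 \times 2 = 4$ are asymptotically stable. A quick sketch to evaluate them:

```python
from math import prod

def equilibria_counts(K):
    # Total equilibrium points and the asymptotically stable subset
    # for an n-neuron network with per-neuron indices K_i >= 0.
    total = prod(2 * k + 1 for k in K)
    stable = prod(k + 1 for k in K)
    return total, stable
```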
Changzhong Wang, Qinghua Hu, Xizhao Wang, Degang Chen, Yuhua Qian, Zhe Dong
Feature selection is viewed as an important preprocessing step for pattern recognition, machine learning, and data mining. Neighborhood is one of the most important concepts in classification learning and can be used to distinguish samples with different decisions. In this paper, a neighborhood discrimination index is proposed to characterize the distinguishing information of a neighborhood relation. It reflects the distinguishing ability of a feature subset. The proposed discrimination index is computed by considering the cardinality of a neighborhood relation rather than neighborhood similarity classes...
June 23, 2017: IEEE Transactions on Neural Networks and Learning Systems
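This excerpt says only that the discrimination index is computed from the cardinality of a neighborhood relation rather than from neighborhood similarity classes. A minimal sketch of that cardinality for a chosen feature subset, where the Euclidean metric and the threshold `delta` are illustrative assumptions:

```python
import numpy as np

def neighborhood_cardinality(X, features, delta):
    # |R|: number of ordered sample pairs (i, j), self-pairs included,
    # that are delta-neighbours under the Euclidean metric restricted
    # to the chosen feature subset.
    Z = X[:, features]
    d = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
    return int((d <= delta).sum())
```

Intuitively, a feature subset that keeps samples of different classes out of each other's neighborhoods yields a smaller relation and hence stronger distinguishing ability.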