IEEE Transactions on Neural Networks and Learning Systems

https://www.readbyqxmd.com/read/28103560/identifying-objective-and-subjective-words-via-topic-modeling
#1
Hanqi Wang, Fei Wu, Weiming Lu, Yi Yang, Xi Li, Xuelong Li, Yueting Zhuang
It is observed that distinct words in a given document have either strong or weak ability to deliver facts (i.e., the objective sense) or express opinions (i.e., the subjective sense) depending on the topics they are associated with. Motivated by the intuitive assumption that different words have varying degrees of discriminative power in delivering the objective or the subjective sense with respect to their assigned topics, a model named identified objective-subjective latent Dirichlet allocation (iosLDA) is proposed in this paper...
January 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
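As background for entry #1, a minimal sketch of the vanilla LDA generative process that the iosLDA variant builds on; the objective/subjective word-level machinery described in the abstract is not reproduced, and the symbols follow the usual LDA convention (an assumption here):

```latex
% Vanilla LDA generative process (background only; iosLDA's
% objective/subjective extension is not modeled here).
\begin{align*}
\phi_k &\sim \mathrm{Dirichlet}(\beta), && k = 1,\dots,K \\
\theta_d &\sim \mathrm{Dirichlet}(\alpha), && d = 1,\dots,D \\
z_{d,n} &\sim \mathrm{Multinomial}(\theta_d), && n = 1,\dots,N_d \\
w_{d,n} &\sim \mathrm{Multinomial}(\phi_{z_{d,n}})
\end{align*}
```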
https://www.readbyqxmd.com/read/28092580/determination-of-the-edge-of-criticality-in-echo-state-networks-through-fisher-information-maximization
#2
Lorenzo Livi, Filippo Maria Bianchi, Cesare Alippi
It is a widely accepted fact that the computational capability of recurrent neural networks (RNNs) is maximized on the so-called "edge of criticality." Once the network operates in this configuration, it performs efficiently on a specific application both in terms of: 1) low prediction error and 2) high short-term memory capacity. Since the behavior of recurrent networks is strongly influenced by the particular input signal driving the dynamics, a universal, application-independent method for determining the edge of criticality is still missing...
January 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
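For entry #2, a minimal echo state network reservoir sketch. The paper's Fisher-information-based determination of the edge of criticality is not reproduced; the code only shows the common heuristic that such a method would replace, namely rescaling the recurrent weights to a target spectral radius close to 1. Sizes, scales, and the input signal are illustrative assumptions.

```python
import numpy as np

# Minimal ESN reservoir; spectral-radius rescaling is the usual stand-in for
# a principled edge-of-criticality tuning (all values are assumptions).
rng = np.random.default_rng(0)
n_reservoir, n_inputs, rho_target = 200, 1, 0.99

W = rng.normal(size=(n_reservoir, n_reservoir))
W *= rho_target / np.max(np.abs(np.linalg.eigvals(W)))   # spectral-radius rescaling
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))

def run_reservoir(u_seq, leak=1.0):
    """Collect states x(t+1) = (1 - leak) x(t) + leak * tanh(W x(t) + W_in u(t))."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.asarray(states)

states = run_reservoir(np.sin(0.1 * np.arange(500)))      # (500, 200) state matrix
```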
https://www.readbyqxmd.com/read/28092579/cooperative-adaptive-output-regulation-for-second-order-nonlinear-multiagent-systems-with-jointly-connected-switching-networks
#3
Wei Liu, Jie Huang
This paper studies the cooperative global robust output regulation problem for a class of heterogeneous second-order nonlinear uncertain multiagent systems with jointly connected switching networks. The main contributions consist of the following three aspects. First, we generalize the result of the adaptive distributed observer from undirected jointly connected switching networks to directed jointly connected switching networks. Second, by performing a new coordinate and input transformation, we convert our problem into the cooperative global robust stabilization problem of a more complex augmented system via the distributed internal model principle...
January 11, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28092578/experienced-gray-wolf-optimization-through-reinforcement-learning-and-neural-networks
#4
E Emary, Hossam M Zawbaa, Crina Grosan
In this paper, a variant of gray wolf optimization (GWO) that uses reinforcement learning principles combined with neural networks to enhance its performance is proposed. The aim is to overcome, through reinforcement learning, the common challenge of setting the right parameters for the algorithm. In GWO, a single parameter is used to control the exploration/exploitation rate, which influences the performance of the algorithm. Rather than changing this parameter globally for all agents, we use reinforcement learning to set it on an individual basis...
January 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
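For entry #4, a standard grey wolf optimizer loop, shown to make the single control parameter `a` concrete: vanilla GWO decays `a` linearly from 2 to 0 for all agents, whereas the paper, per the abstract, sets it per agent with reinforcement learning. The sketch keeps the vanilla global schedule; the sphere objective and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                                   # toy objective (assumption)
    return float(np.sum(x ** 2))

dim, n_wolves, n_iter = 5, 20, 200
X = rng.uniform(-5.0, 5.0, size=(n_wolves, dim))

for t in range(n_iter):
    fitness = np.array([sphere(x) for x in X])
    alpha, beta, delta = X[np.argsort(fitness)[:3]]          # three best wolves
    a = 2.0 - 2.0 * t / n_iter                               # global exploration/exploitation schedule
    for i in range(n_wolves):
        moves = []
        for leader in (alpha, beta, delta):
            A = 2.0 * a * rng.random(dim) - a                # |A| > 1 explores, |A| < 1 exploits
            C = 2.0 * rng.random(dim)
            D = np.abs(C * leader - X[i])
            moves.append(leader - A * D)
        X[i] = np.mean(moves, axis=0)                        # average of the three guided moves

best = min(X, key=sphere)
```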
https://www.readbyqxmd.com/read/28060715/probabilistic-low-rank-multitask-learning
#5
Yu Kong, Ming Shao, Kang Li, Yun Fu
In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task...
January 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
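A deterministic counterpart of the structure described in entry #5, showing how a shared low-rank component and a task-private sparse component are commonly combined in multitask learning. The paper's probabilistic model, which balances the two automatically, is not reproduced; the trade-off weights below are assumptions.

```latex
% Deterministic low-rank + sparse MTL objective (illustrative): the weight
% matrix W = L + S stacks per-task weights w_t = l_t + s_t, with a shared
% low-rank part L and a task-private sparse part S.
\min_{L,\,S}\;\sum_{t=1}^{T} \mathcal{L}\!\big(X_t (l_t + s_t),\, y_t\big)
\;+\; \lambda_1 \lVert L \rVert_{*}
\;+\; \lambda_2 \lVert S \rVert_{1}
```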
https://www.readbyqxmd.com/read/28060713/multiview-boosting-with-information-propagation-for-classification
#6
Jing Peng, Alex J Aved, Guna Seetharaman, Kannappan Palaniappan
Multiview learning has shown promising potential in many applications. However, most techniques focus on either view consistency or view diversity. In this paper, we introduce a novel multiview boosting algorithm, called Boost.SH, that computes weak classifiers independently for each view but uses a shared weight distribution to propagate information among the multiple views to ensure consistency. To encourage diversity, we introduce randomized Boost.SH and show its convergence to the greedy Boost.SH solution, in the sense of minimizing regret, using the framework of adversarial multiarmed bandits...
January 4, 2017: IEEE Transactions on Neural Networks and Learning Systems
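A hedged sketch of multiview boosting with a shared sample-weight distribution, in the spirit of Boost.SH as summarized in entry #6: every view fits its own weak learner on the shared weights, and the round's update comes from the best-performing view. The exact view-selection rule, the randomized (bandit-based) variant, and the decision-stump weak learner are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_sh(views, y, n_rounds=10):
    """views: list of (n_samples, d_v) feature matrices; y: labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                          # shared weight distribution
    ensemble = []
    for _ in range(n_rounds):
        candidates = []
        for v, X in enumerate(views):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.sum(w * (pred != y)) / np.sum(w)
            candidates.append((err, v, stump, pred))
        err, v, stump, pred = min(candidates, key=lambda c: c[0])
        err = float(np.clip(err, 1e-10, 1.0 - 1e-10))
        alpha = 0.5 * np.log((1.0 - err) / err)      # AdaBoost-style vote weight
        w *= np.exp(-alpha * y * pred)               # one shared update for all views
        w /= w.sum()
        ensemble.append((alpha, v, stump))
    return ensemble

def predict(ensemble, views_test):
    score = sum(a * clf.predict(views_test[v]) for a, v, clf in ensemble)
    return np.sign(score)
```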
https://www.readbyqxmd.com/read/28060714/adaptive-reliable-h%C3%A2-static-output-feedback-control-against-markovian-jumping-sensor-failures
#7
Ding Zhai, Liwei An, Dan Ye, Qingling Zhang
This paper investigates the adaptive H∞ static output feedback (SOF) control problem for continuous-time linear systems with stochastic sensor failures. A multi-Markovian variable is introduced to denote the failure scaling factors for each sensor. Different from existing results, the failure parameters jump stochastically and their bounds are unknown. An adaptive reliable H∞ SOF control method is proposed, in which the controller parameters are updated automatically to compensate for the failure effects on the system...
January 2, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28060712/self-taught-low-rank-coding-for-visual-learning
#8
Sheng Li, Kang Li, Yun Fu
The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data...
January 2, 2017: IEEE Transactions on Neural Networks and Learning Systems
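For entry #8, a hedged sketch of the generic low-rank coding formulation that self-taught low-rank coding methods typically build on, with the dictionary drawn from auxiliary (self-taught) data. The paper's specific model is not reproduced; the error norm and weight are assumptions.

```latex
% Generic low-rank coding over a dictionary D (e.g., built from auxiliary,
% unlabeled "self-taught" samples); the nuclear norm on Z encourages the
% codes to capture global structure in the target data X.
\min_{Z,\,E}\; \lVert Z \rVert_{*} + \lambda \lVert E \rVert_{2,1}
\quad \text{s.t.} \quad X = DZ + E
```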
https://www.readbyqxmd.com/read/28055925/global-asymptotic-stability-and-stabilization-of-neural-networks-with-general-noise
#9
Qihe Shan, Huaguang Zhang, Zhanshan Wang, Zhao Zhang
In existing works, neural networks (NNs) in stochastic environments have widely been modeled as stochastic differential equations driven by white noise, such as a Brownian or Wiener process. However, these are not necessarily the best models to describe the dynamic characteristics of NNs disturbed by nonwhite noise in some specific situations. In this paper, a general noise disturbance, which may be nonwhite, is introduced to NNs. Since NNs with nonwhite noise cannot be described by an Itô integral equation, a novel modeling method for stochastic NNs is utilized...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28055924/robust-c-loss-kernel-classifiers
#10
Guibiao Xu, Bao-Gang Hu, Jose C Principe
The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with a Tikhonov regularization term, which is used to avoid overfitting. Using the half-quadratic optimization algorithm, which converges much faster than gradient optimization, we find that the resulting C-loss kernel classifier is equivalent to an iterative weighted least square support vector machine (LS-SVM). This relationship helps explain the robustness of the iterative weighted LS-SVM from the correntropy and density estimation perspectives...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
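For entry #10, a hedged sketch of the C-loss idea solved by half-quadratic optimization: each iteration solves a weighted regularized least-squares problem in the RKHS, with Gaussian-shrunk weights on large-error samples, which is the reweighting the abstract relates to an iterative weighted LS-SVM. The missing bias term, the kernel choice, and all hyperparameters are assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def closs_kernel_classifier(X, y, lam=1e-2, sigma=1.0, n_iter=20):
    """y in {-1, +1}; returns dual coefficients alpha with f(x) = sum_i alpha_i k(x_i, x)."""
    K = rbf_kernel(X, X)
    n = len(y)
    v = np.ones(n)                                   # half-quadratic auxiliary weights
    alpha = np.zeros(n)
    for _ in range(n_iter):
        V = np.diag(v)
        # weighted kernel ridge step: (K V K + lam K) alpha = K V y
        alpha = np.linalg.solve(K @ V @ K + lam * K + 1e-8 * np.eye(n), K @ V @ y)
        e = y - K @ alpha                            # residuals
        v = np.exp(-e ** 2 / (2.0 * sigma ** 2))     # outliers receive small weights
    return alpha
```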
https://www.readbyqxmd.com/read/28055923/supervised-discrete-hashing-with-relaxation
#11
Jie Gui, Tongliang Liu, Zhenan Sun, Dacheng Tao, Tieniu Tan
Data-dependent hashing has recently attracted attention because it supports efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called "supervised discrete hashing with relaxation" (SDHR), based on "supervised discrete hashing" (SDH). SDH uses ordinary least squares regression with the traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
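For entry #11, a hedged sketch of the kind of least-squares objective SDH uses, where B are the binary codes, Y is the zero-one label matrix serving as the fixed regression target, and W is the projection. The exact formulation and the code-fitting term are assumptions, and SDHR's contribution (learning the regression target instead of fixing it to Y) is only described in the abstract, not modeled here.

```latex
% SDH-style objective (sketch): regress the fixed zero-one label matrix Y
% onto binary codes B, with a penalty tying B to a hash function F(X).
\min_{B,\,W,\,F}\; \lVert Y - W^{\top} B \rVert_F^2
 + \lambda \lVert W \rVert_F^2
 + \nu \lVert B - F(X) \rVert_F^2
\quad \text{s.t.} \quad B \in \{-1, 1\}^{L \times n}
```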
https://www.readbyqxmd.com/read/28055922/kernel-based-multilayer-extreme-learning-machines-for-representation-learning
#12
Chi Man Wong, Chi Man Vong, Pak Kin Wong, Jiuwen Cao
Recently, the multilayer extreme learning machine (ML-ELM) was applied to stacked autoencoders (SAEs) for representation learning. In contrast to a traditional SAE, the training time of ML-ELM is significantly reduced from hours to seconds while retaining high accuracy. However, ML-ELM suffers from several drawbacks: 1) manually tuning the number of hidden nodes in every layer introduces uncertainty into training time and generalization; 2) the random projection of input weights and biases in every layer of ML-ELM leads to suboptimal model generalization; 3) the pseudoinverse solution for output weights in every layer incurs a relatively large reconstruction error; and 4) the storage and execution time for the transformation matrices in representation learning are proportional to the number of hidden layers...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
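Background for entry #12: a single-hidden-layer extreme learning machine (ELM) and its kernelized counterpart. ML-ELM stacks ELM autoencoders; a kernel-based variant removes the random input weights (drawback 2 in the abstract) by solving in kernel space instead. The activation, kernel, layer size, and regularization constant C below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=100, C=1.0):
    """Random feature map + ridge-regularized least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta                                 # predict with tanh(Xnew @ W + b) @ beta

def kernel_elm_train(X, T, gamma=0.1, C=1.0):
    """Kernel ELM: no random projection; predict with k(Xnew, X) @ alpha."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + np.eye(len(X)) / C, T)
    return alpha
```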
https://www.readbyqxmd.com/read/28055921/terminal-sliding-mode-based-consensus-tracking-control-for-networked-uncertain-mechanical-systems-on-digraphs
#13
Gang Chen, Yongduan Song, Yanfeng Guan
This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
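For entry #13, a brief reminder of why a terminal sliding surface yields finite-time convergence, stated for a scalar tracking error as a standard result; the paper's distributed, neural-network-based design is not reproduced.

```latex
% Terminal sliding surface for a scalar tracking error e(t), with
% beta > 0 and 0 < gamma < 1. Once the surface s = 0 is reached,
% e converges to zero in finite time with the settling-time bound below.
s = \dot{e} + \beta\, |e|^{\gamma}\, \mathrm{sgn}(e), \qquad
t_{s} \le \frac{|e(0)|^{\,1-\gamma}}{\beta\,(1-\gamma)}
```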
https://www.readbyqxmd.com/read/28055920/robust-dlpp-with-nongreedy-%C3%A2-%C3%A2-norm-minimization-and-maximization
#14
Qianqian Wang, Quanxue Gao, Deyan Xie, Xinbo Gao, Yong Wang
Recently, discriminant locality preserving projection based on the L1-norm (DLPP-L1) was developed for robust subspace learning and image classification. It obtains projection vectors by a greedy strategy, i.e., each projection vector is optimized individually by maximizing the objective function. Thus, the obtained solution does not necessarily best optimize the corresponding trace ratio objective, which is the essential objective function for general dimensionality reduction, resulting in reduced recognition accuracy...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
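For entry #14, the trace ratio criterion referred to in the abstract, in its usual form; the scatter-type matrices are the conventional symbols and stand in for the paper's specific locality-preserving construction. Greedy, vector-by-vector maximization generally does not solve this joint problem, which is the shortcoming the abstract points out.

```latex
% Trace ratio criterion for a projection W with orthonormal columns:
% maximize between-class scatter relative to within-class (or locality)
% scatter jointly over all projection directions, not one column at a time.
\max_{W^{\top} W = I}\;
\frac{\operatorname{tr}\!\left(W^{\top} S_b\, W\right)}
     {\operatorname{tr}\!\left(W^{\top} S_w\, W\right)}
```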
https://www.readbyqxmd.com/read/28055919/optimal-switching-of-dc-dc-power-converters-using-approximate-dynamic-programming
#15
Ali Heydari
Optimal switching between different topologies in step-down dc-dc voltage converters, with nonideal inductors and capacitors, is investigated in this paper. Challenges including constraints on the inductor current and voltage leakage across the capacitor (due to switching) are incorporated. The objective is to generate the desired voltage with low ripple and high robustness to line and load disturbances. A previously developed tool, based on approximate dynamic programming, is adapted for this application...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
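For entry #15, the discrete-time Bellman relation underlying approximate dynamic programming for optimal mode switching, stated generically; the converter dynamics, stage cost, and mode set are placeholders, not the paper's model.

```latex
% Optimal switching as a discrete-time dynamic program: at state x_k, choose
% the mode v (converter topology) minimizing stage cost plus cost-to-go,
% where f_v is the dynamics under mode v. ADP approximates V^* (e.g., with a
% neural network) and then switches greedily with respect to it.
V^{*}(x_k) = \min_{v \in \{1,\dots,M\}}
\Big[\, U(x_k, v) + V^{*}\!\big(f_v(x_k)\big) \Big]
```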
https://www.readbyqxmd.com/read/28055918/stability-of-rotor-hopfield-neural-networks-with-synchronous-mode
#16
Masaki Kobayashi
A complex-valued Hopfield neural network (CHNN) is a model of a Hopfield neural network using multistate neurons. The stability conditions of CHNNs have been widely studied. A CHNN with a synchronous mode will converge to a fixed point or a cycle of length 2. A rotor Hopfield neural network (RHNN) is also a model of a multistate Hopfield neural network. RHNNs have much higher storage capacity and noise tolerance than CHNNs. We extend the theories regarding the stability of CHNNs to RHNNs. In addition, we investigate the stability of RHNNs with the projection rule...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
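For entry #16, a sketch of a complex-valued Hopfield network (CHNN) with multistate, phase-quantized neurons and a synchronous update, the baseline model the abstract extends to rotor Hopfield networks (the rotor extension is not reproduced). Hebbian storage and the resolution K are assumptions; per the abstract, a synchronous run ends in a fixed point or a cycle of length 2.

```python
import numpy as np

K = 8
PHASES = np.exp(2j * np.pi * np.arange(K) / K)        # the K allowed neuron states

def quantize(z):
    """Map each complex activation to the nearest K-th root of unity."""
    idx = np.argmin(np.abs(z[:, None] - PHASES[None, :]), axis=1)
    return PHASES[idx]

def hebbian(patterns):
    """Hebbian-style storage of complex pattern vectors (assumption)."""
    W = sum(np.outer(p, p.conj()) for p in patterns)
    np.fill_diagonal(W, 0)                            # no self-connections
    return W

def synchronous_run(W, x, n_steps=100):
    for _ in range(n_steps):
        x_next = quantize(W @ x)
        if np.allclose(x_next, x):                    # fixed point reached
            break
        x = x_next
    return x
```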
https://www.readbyqxmd.com/read/28055917/dissipativity-analysis-for-stochastic-memristive-neural-networks-with-time-varying-delays-a-discrete-time-case
#17
Sanbo Ding, Zhanshan Wang, Huaguang Zhang
In this paper, the dissipativity problem of discrete-time memristive neural networks (DMNNs) with time-varying delays and stochastic perturbation is investigated. A class of logical switched functions is put forward to reflect the memristor-based switched property of the connection weights, and the DMNNs are then recast into a tractable model. Based on the tractable model, the robust analysis method and refined Jensen-based inequalities are applied to establish some sufficient conditions that ensure the (Q, S, R)-γ-dissipativity of the DMNNs...
December 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28026790/stabilization-of-neural-network-based-control-systems-via-event-triggered-control-with-nonperiodic-sampled-data
#18
Songlin Hu, Dong Yue, Xiangpeng Xie, Yong Ma, Xiuxia Yin
This paper focuses on the problem of event-triggered stabilization for a class of nonuniformly sampled neural-network-based control systems (NNBCSs). First, a new event-triggered data transmission mechanism is designed based on nonperiodic sampled data. Unlike previous works, the proposed triggering scheme enables the NNBCS design to enjoy the advantages of both nonuniform and event-triggered sampling schemes. Second, under the nonperiodic event-triggered data transmission scheme, a nonperiodic sampled-data three-layer fully connected feedforward neural-network (TLFCFFNN)-based event-triggered controller is constructed, and the resulting closed-loop TLFCFFNN-based event-triggered control system is modeled as a state-delay system using a time-delay system modeling approach...
December 26, 2016: IEEE Transactions on Neural Networks and Learning Systems
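For entry #18, a sketch of a basic event-triggered transmission rule of the kind the abstract builds on: at each (possibly nonuniform) sampling instant, the sampled state is sent to the controller only if it deviates enough from the last transmitted state. The quadratic relative threshold with a single parameter sigma is a common choice and an assumption here, not the paper's exact triggering condition.

```python
import numpy as np

def event_triggered_stream(samples, sigma=0.1):
    """samples: iterable of state vectors at sampling instants; yields (state used, transmitted?)."""
    last_sent = None
    for x in samples:
        x = np.asarray(x, dtype=float)
        err = x if last_sent is None else x - last_sent
        if last_sent is None or err @ err > sigma * (x @ x):
            last_sent = x.copy()                      # event: transmit the fresh sample
            yield last_sent, True
        else:
            yield last_sent, False                    # no event: controller keeps old data
```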
https://www.readbyqxmd.com/read/28026789/a-deep-convolutional-coupling-network-for-change-detection-based-on-heterogeneous-optical-and-radar-images
#19
Jia Liu, Maoguo Gong, Kai Qin, Puzhao Zhang
We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric with each side consisting of one convolutional layer and several coupling layers...
December 22, 2016: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28026788/synchronization-of-general-chaotic-neural-networks-with-nonuniform-sampling-and-packet-missing-a-switched-system-approach
#20
Renquan Lu, Peng Shi, Hongye Su, Zheng-Guang Wu, Jianquan Lu
This paper is concerned with the exponential synchronization of general chaotic neural networks subject to nonuniform sampling and control packet missing, in the framework of the zero-input strategy. Based on this strategy, we use a switched system model to describe the synchronization error system. First, when no control packet is missing, an exponential stability criterion with less conservatism is established for the resulting synchronization error systems via a superior time-dependent Lyapunov functional and a convex optimization approach...
December 22, 2016: IEEE Transactions on Neural Networks and Learning Systems