Read by QxMD

IEEE Transactions on Neural Networks and Learning Systems

Yue-Jiao Gong, Jun Zhang, Yicong Zhou
Most learning methods contain optimization as a substep, and the nondifferentiability and multimodality of their objectives have driven the interplay between evolutionary optimization algorithms and machine learning models. The recently emerged evolutionary multimodal optimization (MMOP) technique enables a model to learn diverse sets of effective parameters simultaneously, providing new opportunities for applications that require both accuracy and diversity, such as ensemble, interactive, and interpretive learning...
June 20, 2017: IEEE Transactions on Neural Networks and Learning Systems
Jiaming Zhu, Zhiqiang Cao, Tianping Zhang, Yuequan Yang, Yang Yi
In this brief, sufficient conditions are proposed for the existence of the compact sets in neural network control schemes. First, we point out that the existence of the compact set in a classical neural network control scheme is unsolved, so its result is incomplete. Next, as a simple case, we derive the sufficient condition for the existence of the compact set for the neural network control of first-order systems. Finally, we propose the sufficient condition for the existence of the compact set for the neural-network-based backstepping control of high-order nonlinear systems...
June 20, 2017: IEEE Transactions on Neural Networks and Learning Systems
Liying Zhu, Zhengrong Xiang
This paper addresses the aggregation issues of competitive multiagent systems (CMASs) consisting of competitive agents with multiple modes and saddle points. In such CMASs, due to the mutual competition among agents, every agent is equipped with finitely many modes, and every mode of any agent is described as a second-order linear time-invariant (LTI) control system. When the origin is the common saddle point of all modes of all agents, to investigate aggregation of the CMASs with switching strategies, we first use switched LTI systems with saddle points to formulate such CMASs...
June 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
Zhigang Ma, Xiaojun Chang, Zhongwen Xu, Nicu Sebe, Alexander G Hauptmann
Semantic attributes have been increasingly used in the past few years for multimedia event detection (MED), with promising results. The motivation is that multimedia events generally consist of lower level components such as objects, scenes, and actions. By characterizing multimedia event videos with semantic attributes, one can exploit more informative cues for improved detection results. Much existing work obtains semantic attributes from images, which may be suboptimal for video analysis, since these image-inferred attributes do not carry the dynamic information that is essential for videos...
June 15, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xinjiang Lu, Wenbo Liu, Chuang Zhou, Minghui Huang
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero...
June 13, 2017: IEEE Transactions on Neural Networks and Learning Systems
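The modified objective described in this abstract can be sketched as follows. The function name, the linear (primal) form, and the weighting `gamma` are illustrative assumptions, not the paper's exact formulation, which works with kernels; the key idea is that both the mean and the variance of the modeling error are penalized, without forcing the mean to zero:

```python
import numpy as np

def robust_lssvm_objective(w, b, X, y, gamma=1.0):
    """Sketch of an LS-SVM-style objective that penalizes both the
    mean and the variance of the modeling error, instead of the
    plain sum of squared errors (hypothetical primal form)."""
    e = y - (X @ w + b)  # modeling error on the training data
    # squared mean reduces systematic bias; variance controls spread
    return 0.5 * (w @ w) + gamma * (np.mean(e) ** 2 + np.var(e))
```

Unlike the standard squared-error sum, a single gross outlier inflates the variance term but cannot drag the fitted mean arbitrarily, which is one intuition for the improved behavior under non-Gaussian noise.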
Giovanni Da San Martino, Nicolo Navarin, Alessandro Sperduti
The availability of graph data with node attributes that can be either discrete or real-valued is constantly increasing. While existing kernel methods are effective techniques for dealing with graphs having discrete node labels, their adaptation to nondiscrete or continuous node attributes has been limited, mainly due to computational issues. Recently, a few kernels especially tailored for this domain, which trade predictive performance for computational efficiency, have been proposed. In this brief, we propose a graph kernel for complex and continuous node attributes, whose features are tree structures extracted from specific graph visits...
June 13, 2017: IEEE Transactions on Neural Networks and Learning Systems
Weiwei Shi, Yihong Gong, Xiaoyu Tao, Nanning Zheng
In this paper, we build a multilabel image classifier using a general deep convolutional neural network (DCNN). We propose a novel objective function that consists of three parts, i.e., max-margin objective, max-correlation objective, and correntropy loss. The max-margin objective explicitly enforces that the minimum score of positive labels must be larger than the maximum score of negative labels by a predefined margin, which not only improves accuracies of the multilabel classifier, but also eases the threshold determination...
June 13, 2017: IEEE Transactions on Neural Networks and Learning Systems
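The margin constraint described in this abstract (the minimum positive-label score must exceed the maximum negative-label score by a predefined margin) can be sketched as a hinge penalty. The function and the NumPy formulation are illustrative, not the authors' exact loss:

```python
import numpy as np

def max_margin_loss(scores, positive_mask, margin=1.0):
    # hinge on the gap between the lowest positive-label score
    # and the highest negative-label score
    pos_min = scores[positive_mask].min()
    neg_max = scores[~positive_mask].max()
    return max(0.0, margin - (pos_min - neg_max))
```

When the gap already exceeds the margin, the loss is zero; enforcing such a gap is also what makes a single fixed threshold between positive and negative labels easy to choose at test time.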
Fangfei Li, Huaicheng Yan, Hamid Reza Karimi
This brief is concerned with the problem of a single-input pinning control design for reachability of Boolean networks (BNs). Specifically, the transition matrix of a BN is designed to steer the BN from an initial state to a desirable one. In addition, some nodes are selected as the pinning nodes by solving some logical matrix equations. Furthermore, a single-input pinning control algorithm is given. Eventually, a genetic regulatory network is provided to demonstrate the effectiveness and feasibility of the developed method...
June 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
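In the algebraic state-space representation commonly used for Boolean networks, states are canonical basis vectors and the dynamics reduce to multiplication by a logical transition matrix. A minimal reachability check under an assumed, already-designed transition matrix (a sketch, not the pinning design procedure of the brief) might look like:

```python
import numpy as np

def reaches(T, x0, xd, max_steps=16):
    """Iterate x(t+1) = T x(t) and report whether the desired state
    xd is hit within max_steps. T is a logical matrix with exactly
    one 1 per column; states are canonical basis vectors."""
    x = np.asarray(x0)
    for _ in range(max_steps):
        if np.array_equal(x, xd):
            return True
        x = T @ x
    return bool(np.array_equal(x, xd))
```

The design problem the brief actually solves runs in the other direction: choose (pin) entries of T so that such a check succeeds for the desired initial/target pair.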
Cristiano Cervellera, Danilo Maccio
The need to extract a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, to obtain good training/test/validation sets, and to select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to that of the original data...
June 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
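One crude surrogate for keeping the selected points distributed like the full data is to greedily keep the running mean of the selection close to the mean of the whole set. The function below is an illustrative sketch only, not the authors' method, which matches distributions rather than just the first moment:

```python
import numpy as np

def greedy_mean_matching(X, k):
    """Pick k rows of X so that the sample mean of the selection
    stays close to the mean of the full data set (a first-moment
    surrogate for distribution matching)."""
    target = X.mean(axis=0)
    chosen, total = [], np.zeros(X.shape[1])
    remaining = list(range(len(X)))
    for step in range(1, k + 1):
        # add the point whose inclusion keeps the running mean closest
        best = min(remaining,
                   key=lambda i: np.linalg.norm((total + X[i]) / step - target))
        chosen.append(best)
        total += X[best]
        remaining.remove(best)
    return chosen
```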
Weiwei Shi, Yihong Gong, Xiaoyu Tao, Jinjun Wang, Nanning Zheng
We propose a novel method for improving the performance accuracy of a convolutional neural network (CNN) without the need to increase the network complexity. We accomplish this goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds...
June 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
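The Min-Max idea (compact within-class feature manifolds, well-separated between-class manifolds) can be sketched as a pairwise penalty on a batch of features. The quadratic pairwise form and the trade-off weight `lam` are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def min_max_penalty(features, labels, lam=1.0):
    # sum of within-class squared distances (to be minimized)
    # minus between-class squared distances (to be maximized)
    within = between = 0.0
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = float(np.sum((features[i] - features[j]) ** 2))
            if labels[i] == labels[j]:
                within += d
            else:
                between += d
    return within - lam * between
```

Minimizing this quantity (e.g., added to the task loss at an intermediate layer) pulls same-class features together and pushes different-class features apart.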
Xuesong Zhang, Yan Zhuang, Wei Wang, Witold Pedrycz
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation...
June 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
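The passive-aggressive principle this brief builds on is shown here for the simpler case of a linear classifier (the standard PA-I update) rather than the authors' feature-transformation matrix:

```python
import numpy as np

def pa1_update(w, x, y, C=1.0):
    """PA-I step: make the smallest change to w that fixes the
    current hinge-loss violation, with aggressiveness capped by C."""
    loss = max(0.0, 1.0 - y * (w @ x))
    if loss == 0.0:
        return w                     # passive: example already safe
    tau = min(C, loss / (x @ x))     # aggressive: project onto margin
    return w + tau * y * x
```

The same passive/aggressive trade-off carries over when the learned object is a transformation matrix instead of a weight vector: no update when the current metric satisfies the constraint, a minimal corrective update otherwise.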
Daniel Paul Barrett, Scott Alan Bronikowski, Haonan Yu, Jeffrey Mark Siskind
We present a unified framework which supports grounding natural-language semantics in robotic driving. This framework supports acquisition (learning grounded meanings of nouns and prepositions from human sentential annotation of robotic driving paths), generation (using such acquired meanings to generate sentential description of new robotic driving paths), and comprehension (using such acquired meanings to support automated driving to accomplish navigational goals specified in natural language). We evaluate the performance of these three tasks by having independent human judges rate the semantic fidelity of the sentences associated with paths...
June 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
Vignesh Narayanan, Sarangapani Jagannathan
This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems, using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward-in-time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online...
June 8, 2017: IEEE Transactions on Neural Networks and Learning Systems
Zhao-Rong Lai, Dao-Qing Dai, Chuan-Xian Ren, Ke-Kun Huang
We propose a novel linear learning system based on the peak price tracking (PPT) strategy for portfolio selection (PS). Recently, the topic of tracking control has attracted intensive attention, and some novel models have been proposed based on backstepping methods, such that the system output tracks a desired trajectory. The proposed system has a similar evolution, with a transform function that aggressively tracks the increasing power of different assets. As a result, the better performing assets will receive more investment...
June 7, 2017: IEEE Transactions on Neural Networks and Learning Systems
Marian B Gorzalczany, Filip Rudzinski
This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets...
June 7, 2017: IEEE Transactions on Neural Networks and Learning Systems
Qing Tao, Gaowei Wu, Dejun Chu
The truncated regular L₁-loss support vector machine can eliminate an excessive number of support vectors (SVs); thus, it has significant advantages in robustness and scalability. However, in this paper, we discover that the associated state-of-the-art solvers, such as the difference-of-convex algorithm and the concave-convex procedure, not only have a limited sparsity-promoting property for general truncated losses, especially the L₂-loss, but also scale poorly to large-scale problems. To circumvent these drawbacks, we present a general multistage scheme with an explicit interpretation of SVs as well as outliers...
June 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
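A truncated loss in the sense discussed here simply caps the standard hinge, so that gross outliers contribute a bounded loss (and, in the SVM dual, stop entering the support-vector set). The cap value `s` below is an illustrative choice:

```python
def truncated_hinge(y, score, s=2.0):
    # standard hinge max(0, 1 - y*score), capped at s so that
    # points far on the wrong side of the margin contribute a
    # bounded loss instead of growing without limit
    return min(max(0.0, 1.0 - y * score), s)
```

The nonconvexity introduced by the cap is exactly what forces solvers like the concave-convex procedure into the picture, which is the setting the multistage scheme above targets.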
Di-Hua Zhai, Yuanqing Xia
This paper addresses the telecoordinated control of multiple robots in the simultaneous presence of asymmetric time-varying delays, nonpassive external forces, and uncertain kinematics/dynamics. To achieve the control objective, a neuroadaptive controller utilizing prescribed performance control and switching control techniques is developed, where the basic idea is to employ the concept of motion synchronization in each pair of master-slave robots and among all slave robots. By using the multiple Lyapunov-Krasovskii functionals method, the state-independent input-to-output practical stability of the closed-loop system is established...
June 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
Wentao Guo, Jennie Si, Feng Liu, Shengwei Mei
Policy iteration approximate dynamic programming (DP) is an important algorithm for solving optimal decision and control problems. In this paper, we focus on the problem associated with policy approximation in policy iteration approximate DP for discrete-time nonlinear systems using infinite-horizon undiscounted value functions. Taking policy approximation error into account, we demonstrate asymptotic stability of the control policy under our problem setting, show boundedness of the value function during each policy iteration step, and introduce a new sufficient condition for the value function to converge to a bounded neighborhood of the optimal value function...
June 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
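Tabular policy iteration, the exact baseline that the approximate DP scheme above perturbs with policy-approximation error, alternates evaluation and improvement until the policy is stable. A minimal sketch for a finite MDP (the paper's setting is continuous-state with neural approximators):

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """P[a] is the state-transition matrix under action a,
    R[a] the reward vector; returns the optimal policy and values."""
    nA, nS = len(P), P[0].shape[0]
    policy = np.zeros(nS, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        P_pi = np.array([P[policy[s]][s] for s in range(nS)])
        R_pi = np.array([R[policy[s]][s] for s in range(nS)])
        V = np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)
        # policy improvement: greedy with respect to the Q-values
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(nA)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```

In the approximate setting, both the evaluation step and the greedy step are carried out by function approximators, and it is precisely the resulting policy-approximation error whose effect on stability and value-function boundedness the paper analyzes.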
Xiaofeng Chen, Qiankun Song, Zhongshan Li, Zhenjiang Zhao, Yurong Liu
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively...
June 5, 2017: IEEE Transactions on Neural Networks and Learning Systems
Hongli Dong, Nan Hou, Zidong Wang, Weijian Ren
This paper investigates the variance-constrained H∞ state estimation problem for a class of nonlinear time-varying complex networks with randomly varying topologies, stochastic inner coupling, and measurement quantization. A Kronecker delta function and Markovian jumping parameters are utilized to describe the random changes of network topologies. A Gaussian random variable is introduced to model the stochastic disturbances in the inner coupling of complex networks. As a kind of incomplete measurements, measurement quantization is taken into consideration so as to account for the signal distortion phenomenon in the transmission process...
May 23, 2017: IEEE Transactions on Neural Networks and Learning Systems