IEEE Transactions on Neural Networks and Learning Systems

Jing Zhang, Victor S Sheng, Tao Li, Xindong Wu
Crowdsourcing systems provide a cost-effective and convenient way to collect labels, but they often fail to guarantee label quality. This paper proposes a novel framework that introduces noise correction techniques to further improve the quality of integrated labels inferred from the multiple noisy labels of objects. In the proposed general framework, information about the qualities of labelers, estimated by a front-end ground truth inference algorithm, is utilized to supervise subsequent label noise filtering and correction...
March 22, 2017: IEEE Transactions on Neural Networks and Learning Systems
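The abstract above describes quality-weighted integration of noisy crowdsourced labels. As a toy illustration of how labeler-quality estimates from a front-end inference step can supervise label integration, here is a weighted-majority-vote sketch (the function name and log-odds weighting are ours, not the paper's; the paper adds noise filtering and correction on top of such integrated labels):

```python
import numpy as np

def integrate_labels(votes, quality):
    """Weighted majority vote over multiple noisy labelers: each labeler's
    binary vote is weighted by the log-odds of its estimated quality, so
    labelers judged more reliable dominate the integrated label.

    votes   : (n_objects, n_labelers) array of {0, 1} labels
    quality : (n_labelers,) estimated labeler accuracies in (0, 1)
    """
    eps = 1e-6
    w = np.log((quality + eps) / (1.0 - quality + eps))  # log-odds weights
    score = (2 * votes - 1) @ w    # signed, quality-weighted vote tally
    return (score > 0).astype(int)
```

For instance, with three labelers of estimated accuracies 0.9, 0.8, and 0.4, the two reliable labelers outvote the unreliable one even when they disagree with it.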
Jianyong Wang, Lei Zhang, Quan Guo, Zhang Yi
Memory is one of the most important mechanisms in recurrent neural network (RNN) learning. It plays a crucial role in practical applications, such as sequence learning. With a good memory mechanism, long-term history can be fused with current information and can thus improve RNN learning. Developing a suitable memory mechanism is always desirable in the field of RNNs. This paper proposes a novel memory mechanism for RNNs. The main contributions of this paper are: 1) an auxiliary memory unit (AMU) is proposed, resulting in a new special RNN model (AMU-RNN) that separates the memory and the output explicitly and 2) an efficient learning algorithm is developed by employing the technique of error flow truncation...
March 21, 2017: IEEE Transactions on Neural Networks and Learning Systems
Yuhu Cheng, Xue Qiao, Xuesong Wang, Qiang Yu
For zero-shot image classification with relative attributes (RAs), the traditional method requires not only that all seen and unseen images obey a Gaussian distribution, but also that classification of testing samples be performed by maximum likelihood estimation. We therefore propose a novel zero-shot image classifier called random forest based on relative attributes. First, based on ordered and unordered pairs of images from the seen classes, the idea of the ranking support vector machine is used to learn ranking functions for attributes...
March 21, 2017: IEEE Transactions on Neural Networks and Learning Systems
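The abstract above mentions learning per-attribute ranking functions from ordered image pairs with a ranking SVM. A minimal sketch of that first step, assuming a linear ranking function and simple subgradient training (the paper builds on ranking SVMs, but its exact solver and features are not given here):

```python
import numpy as np

def rank_svm_sgd(X, ordered_pairs, lam=0.01, lr=0.1, epochs=100):
    """Learn a linear ranking function r(x) = w . x from ordered pairs
    (i, j), meaning image i exhibits the attribute more strongly than
    image j, via subgradient steps on a RankSVM-style hinge objective."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in ordered_pairs:
            diff = X[i] - X[j]
            if w @ diff < 1.0:            # margin violated: push pair apart
                w += lr * (diff - lam * w)
            else:                          # satisfied: regularization only
                w -= lr * lam * w
    return w
```

After training, sorting images by `X @ w` recovers the attribute ordering implied by the training pairs.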
Narges Armanfard, James P Reilly, Majid Komeili
Conventional feature selection algorithms assign a single common feature set to all regions of the sample space. In contrast, this paper proposes a novel algorithm for localized feature selection for which each region of the sample space is characterized by its individual distinct feature subset that may vary in size and membership. This approach can therefore select an optimal feature subset that adapts to local variations of the sample space, and hence offer the potential for improved performance. Feature subsets are computed by choosing an optimal coordinate space so that, within a localized region, within-class distances and between-class distances are, respectively, minimized and maximized...
March 21, 2017: IEEE Transactions on Neural Networks and Learning Systems
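The abstract above scores localized feature subsets by minimizing within-class distances while maximizing between-class distances inside a region. A simplified scoring criterion in that spirit (the region partitioning, the exact objective, and the coordinate-space optimization of the paper are not reproduced here):

```python
import numpy as np

def local_fs_score(X, y, region_idx, feats):
    """Score a candidate feature subset for one local region: mean
    between-class distance divided by mean within-class distance,
    computed only over the region's samples and the chosen features.
    Higher is better (classes separate, same-class points cluster)."""
    Z = X[np.ix_(region_idx, feats)]
    lab = y[region_idx]
    within, between = [], []
    for a in range(len(Z)):
        for b in range(a + 1, len(Z)):
            d = np.linalg.norm(Z[a] - Z[b])
            (within if lab[a] == lab[b] else between).append(d)
    return np.mean(between) / (np.mean(within) + 1e-12)
```

Different regions can then keep different subsets: a feature that separates classes in one neighborhood may be pure noise in another.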
Qiuwen Chen, Ryan Luley, Qing Wu, Morgan Bishop, Richard W Linderman, Qinru Qiu
The evolution of high-performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed research in computational intelligence into a new era. Among machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of multicore systems while maintaining high sensitivity and specificity to anomalies is an urgent research topic...
March 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
Qin Zhang, Quanying Yao
The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnosis of large, complex industrial systems, as well as disease diagnosis. This paper extends the DUCG to model more complex cases than could previously be modeled, e.g., the case in which statistical data fall into different groups with or without overlap, and some domain knowledge and actions (new variables with uncertain causalities) are introduced...
March 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
Rahul Kumar Agarwal, Ikhlaq Hussain, Bhim Singh
This paper proposes an application of a least mean-square (LMS)-based neural network (NN) structure for power quality improvement of a three-phase power distribution network under abnormal conditions. It uses a single-layer neuron structure for the control of a distribution static compensator (DSTATCOM) to attenuate disturbances such as harmonics, noise, bias, notches, dc offset, and distortion injected into the grid current due to the connection of several nonlinear loads. This admittance LMS-based NN structure has a simple architecture, which reduces the computational complexity and burden and makes it easy to implement...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
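The abstract above is built around a single-neuron LMS structure. The core Adaline/LMS recursion it relies on can be sketched as follows: adapt one weight so that a scaled unit template tracks the load current's fundamental component (this is an illustrative sketch only; the paper's admittance-based control law has additional terms and signals):

```python
import numpy as np

def lms_weight(i_load, u_template, mu=0.02):
    """Single-neuron LMS estimator: adapt the weight w so that w*u[k]
    tracks the component of the load current i[k] along the unit
    template u[k] (here an in-phase sinusoid), rejecting harmonics."""
    w = 0.0
    for i_k, u_k in zip(i_load, u_template):
        e = i_k - w * u_k   # tracking error
        w += mu * e * u_k   # LMS weight update
    return w

theta = np.linspace(0.0, 100.0 * np.pi, 5000)       # 50 fundamental cycles
u = np.sin(theta)                                    # fundamental template
i = 2.0 * np.sin(theta) + 0.5 * np.sin(5 * theta)    # distorted load current
w = lms_weight(i, u)   # converges near the fundamental amplitude
```

Because the fifth harmonic is orthogonal to the template on average, the weight settles close to the true fundamental amplitude of 2.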
Yanwei Pang, Manli Sun, Xiaoheng Jiang, Xuelong Li
Network in network (NiN) is an effective instance and an important extension of the deep convolutional neural network, which consists of alternating convolutional layers and pooling layers. Instead of using a linear filter for convolution, NiN utilizes a shallow multilayer perceptron (MLP), a nonlinear function, in its place. Because of the power of the MLP and of 1 x 1 convolutions in the spatial domain, NiN has a stronger feature representation ability and hence achieves better recognition performance...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
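The abstract above hinges on the equivalence between 1 x 1 convolutions and a per-pixel MLP. A minimal NumPy sketch of an mlpconv block makes this concrete (shapes and the ReLU choice are illustrative; NiN's actual layers and trained weights are not reproduced):

```python
import numpy as np

def conv1x1(x, W, b):
    """A 1x1 convolution: every spatial position's channel vector is
    mapped by the same weight matrix, followed by ReLU.
    x: (H, W, C_in), W: (C_in, C_out), b: (C_out,)"""
    return np.maximum(x @ W + b, 0.0)

def mlpconv(x, params):
    """Stack of 1x1 convolutions = a shallow MLP slid over every pixel,
    the nonlinear replacement for a single linear filter in NiN."""
    for W, b in params:
        x = conv1x1(x, W, b)
    return x
```

Spatial resolution is untouched; only the channel dimension is transformed nonlinearly at each pixel.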
Alaeddin Malek, Najmeh Hosseinipour-Mahani
In this paper, a neural network model for solving a class of multiextremal smooth nonconvex constrained optimization problems is proposed. The neural network is designed in such a way that its equilibrium points coincide with the local and global optimal solutions of the corresponding optimization problem. Based on suitable underestimators for the Lagrangian of the problem, geometric criteria are given for an equilibrium point to be a global minimizer of the multiextremal constrained optimization problem, with or without bounds on the variables...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xiaozhao Zhao, Yuexian Hou, Dawei Song, Wenjie Li
Typical dimensionality reduction (DR) methods are data-oriented, focusing on directly reducing the number of random variables (or features) while retaining the maximal variations in the high-dimensional data. Targeting unsupervised situations, this paper aims to address the problem from a novel perspective and considers model-oriented DR in parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called confident-information-first (CIF) principle, to maximally preserve confident parameters and rule out less confident ones...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xiantong Zhen, Mengyang Yu, Feng Zheng, Ilanit Ben Nachum, Mousumi Bhaduri, David Laidley, Shuo Li
Multitarget regression has recently attracted intensive interest due to its ability to simultaneously solve multiple regression tasks with improved performance, while great challenges stem from jointly exploring intertarget correlations and input-output relationships. In this paper, we propose multitarget sparse latent regression (MSLR) to simultaneously model intrinsic intertarget correlations and complex nonlinear input-output relationships in a single framework. By deploying a structure matrix, the MSLR realizes a latent variable model that explicitly encodes intertarget correlations via ℓ2,1-norm-based sparse learning; the MSLR naturally admits a representer theorem for kernel extension, which enables it to flexibly handle highly complex nonlinear input-output relationships; and the MSLR can be solved efficiently by an alternating optimization algorithm with guaranteed convergence, which ensures efficient multitarget regression...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
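The ℓ2,1-norm penalty mentioned in the abstract above induces row sparsity: entire rows of the structure matrix are driven to zero. Its definition and the group soft-thresholding step that alternating solvers for such penalties typically use can be sketched as (a generic illustration, not the MSLR algorithm itself):

```python
import numpy as np

def l21_norm(S):
    """l2,1 norm: the sum of the l2 norms of the rows. Penalizing it
    zeroes out whole rows, encoding sparse intertarget structure."""
    return np.sum(np.linalg.norm(S, axis=1))

def prox_l21(S, tau):
    """Proximal operator of tau * ||.||_{2,1}: group soft-thresholding.
    Rows with norm below tau vanish; the rest shrink toward zero."""
    norms = np.linalg.norm(S, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return S * scale
```

The prox step is what makes alternating optimization of such objectives cheap: it has a closed form per row.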
Wei He, Tingting Meng, Deqing Huang, Xuefang Li
This paper addresses the vibration control and the input constraint for an Euler-Bernoulli beam system under aperiodic distributed disturbance and aperiodic boundary disturbance. Hyperbolic tangent functions and saturation functions are adopted to tackle the input constraint. A restrained adaptive boundary iterative learning control (ABILC) law is proposed based on a time-weighted Lyapunov-Krasovskii-like composite energy function. In order to deal with the uncertainty of a system parameter and reject the external disturbances, three adaptive laws are designed and learned in the iteration domain...
March 15, 2017: IEEE Transactions on Neural Networks and Learning Systems
Hao Zhang, Yin Sheng, Zhigang Zeng
This paper investigates the synchronization issue of coupled reaction-diffusion neural networks with directed topology via an adaptive approach. Due to the complexity of the network structure and the presence of space variables, it is difficult to design proper adaptive strategies on the coupling weights to accomplish the synchronization goal. Under the assumption of two kinds of special network structures, namely, a directed spanning path and a directed spanning tree, some novel edge-based adaptive laws, which fully utilize the local information of node dynamics, are designed on the coupling weights for reaching synchronization...
March 15, 2017: IEEE Transactions on Neural Networks and Learning Systems
Xuhui Bu, Zhongsheng Hou, Hongwei Zhang
This paper investigates the data-driven consensus tracking problem for multiagent systems with both fixed and switching communication topologies by utilizing a distributed model-free adaptive control (MFAC) method. Here, each agent's dynamics are described by an unknown nonlinear system, and only a subset of followers can access the desired trajectory. The dynamical linearization technique is applied to each agent based on the pseudo partial derivative, and then a distributed MFAC algorithm is proposed to ensure that all agents can track the desired trajectory...
March 14, 2017: IEEE Transactions on Neural Networks and Learning Systems
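The abstract above rests on dynamical linearization via the pseudo partial derivative (PPD). For a single agent, the standard compact-form MFAC recursions look as follows (gains and initial values here are illustrative; the paper's distributed version replaces the desired output with neighbor-based tracking information):

```python
def mfac_step(phi, u_prev, du_prev, y, y_prev, yd,
              eta=0.5, rho=0.6, mu=1.0, lam=1.0):
    """One step of compact-form model-free adaptive control: re-estimate
    the pseudo partial derivative phi from the latest I/O increments,
    then adjust the input toward the desired output yd."""
    dy = y - y_prev
    # PPD estimate update (projection-type law)
    phi = phi + eta * du_prev * (dy - phi * du_prev) / (mu + du_prev ** 2)
    # Control update using only measured data, no plant model
    u = u_prev + rho * phi * (yd - y) / (lam + phi ** 2)
    return phi, u

# Demo: drive an unknown first-order plant to a constant setpoint of 1.0.
phi, u, du, y, y_prev = 0.5, 0.0, 0.0, 0.0, 0.0
for _ in range(300):
    phi, u_new = mfac_step(phi, u, du, y, y_prev, 1.0)
    du, u = u_new - u, u_new
    y_prev, y = y, 0.7 * y + 0.4 * u   # plant, unknown to the controller
```

The controller never sees the plant equation; it relies only on measured input-output increments, which is the "data-driven" aspect the abstract emphasizes.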
Pratik Prabhanjan Brahma, Yiyuan She, Shijie Li, Jiade Li, Dapeng Wu
High-dimensional data present in the real world is often corrupted by noise and gross outliers. Principal component analysis (PCA) fails to learn the true low-dimensional subspace in such cases. This is the reason why robust versions of PCA, which put a penalty on arbitrarily large outlying entries, are preferred to perform dimension reduction. In this paper, we argue that it is necessary to study the presence of outliers not only in the observed data matrix but also in the orthogonal complement subspace of the authentic principal subspace...
March 14, 2017: IEEE Transactions on Neural Networks and Learning Systems
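The abstract above contrasts PCA with robust variants that penalize arbitrarily large outlying entries. As background, a generic principal-component-pursuit baseline via an inexact augmented-Lagrangian scheme can be sketched as follows (this is the standard low-rank-plus-sparse decomposition, not the outlier-pursuit variant the paper develops):

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, iters=200):
    """Split M into a low-rank part L and a sparse outlier part S by
    alternating singular-value thresholding and entrywise shrinkage,
    with dual updates enforcing M = L + S."""
    lam = lam or 1.0 / np.sqrt(max(M.shape))
    mu = 1.25 / np.linalg.norm(M, 2)
    Y = np.zeros_like(M)     # dual variable
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: singular-value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: shrink the residual entrywise
        S = soft(M - L + Y / mu, lam / mu)
        Z = M - L - S                   # constraint violation
        Y = Y + mu * Z                  # dual ascent
        mu = min(mu * 1.5, 1e7)         # continuation on the penalty
        if np.linalg.norm(Z) < 1e-7 * np.linalg.norm(M):
            break
    return L, S
```

On a rank-one matrix with one grossly corrupted entry, the sparse component localizes the corruption while L + S reconstructs the observation.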
Shenglan Liu, Lin Feng, Yang Liu, Hong Qiao, Jun Wu, Wei Wang
Human action segmentation is important for human action analysis, which is a highly active research area. Most segmentation methods are based on clustering or numerical descriptors, which are only related to data, and consider no relationship between the data and physical characteristics of human actions. Physical characteristics of human motions are those that can be directly perceived by human beings, such as speed, acceleration, continuity, and so on, which are quite helpful in detecting human motion segment points...
March 8, 2017: IEEE Transactions on Neural Networks and Learning Systems
Haihua Liu, Na Shu, Qiling Tang, Wensheng Zhang
In this paper, we propose a bioinspired model for human action recognition through modeling neural mechanisms of information processing in two visual cortical areas: the primary visual cortex (V1) and the middle temporal cortex (MT) dedicated to motion. This model, named V1-MT, is composed of V1 and MT models (layers) corresponding to their cortical areas, which are built with layered spiking neural networks (SNNs). Some neuron properties in V1 and MT, such as direction and speed selectivity, spatiotemporal inseparability, and center-surround suppression, are integrated into the SNNs...
March 8, 2017: IEEE Transactions on Neural Networks and Learning Systems
Lok-Won Kim
Although there have been many decades of research and commercial presence on high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully applied in a wide variety of fields, but its heavy computation demand has considerably limited its practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of a particular class of artificial neural network (ANN), the restricted Boltzmann machine (RBM)...
March 8, 2017: IEEE Transactions on Neural Networks and Learning Systems
Junxiu Liu, Jim Harkin, Liam P Maguire, Liam J McDaid, John J Wade
Recent research has shown that a type of glial cell, the astrocyte, underpins a self-repair mechanism in the human brain, in which spiking neurons provide direct and indirect feedback to presynaptic terminals. This feedback modulates the synaptic transmission probability of release (PR). When synaptic faults occur, the neuron becomes silent or near silent due to the low PR of its synapses; the PRs of the remaining healthy synapses are then increased by the indirect feedback from the astrocyte. In this paper, a novel hardware architecture of a Self-rePAiring spiking Neural NEtwoRk (SPANNER) is proposed, which mimics this self-repairing capability of the human brain...
March 6, 2017: IEEE Transactions on Neural Networks and Learning Systems
Juntao Fei, Cheng Lu
In this paper, an adaptive sliding mode control system using a double-loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, in which the firing weights and the output signal calculated in the last step are stored and used as the feedback signals of each loop. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire internal state information while the output signal is also captured; thus, the DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop...
March 6, 2017: IEEE Transactions on Neural Networks and Learning Systems