IEEE Transactions on Neural Networks and Learning Systems

https://www.readbyqxmd.com/read/28534793/an-information-theoretic-cluster-visualization-for-self-organizing-maps
#1
Leonardo Enzo Brito da Silva, Donald C Wunsch
Improved data visualization is a significant aid to cluster analysis. In this paper, an information-theoretic method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set, whose cardinality controls the granularity of the IT-vis and from which first- and second-order statistics are computed and used to estimate the corresponding probability density functions...
May 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
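Since the abstract is cut off, the mechanics can only be sketched: the IT-vis replaces each U-matrix cell's Euclidean distance with a distributional similarity between the data subsets of adjacent neurons. Below is a minimal sketch, assuming each subset is summarized by a univariate Gaussian built from the first- and second-order statistics the abstract mentions, and using symmetrized KL divergence as one plausible (not necessarily the paper's) similarity measure; both function names are hypothetical.

```python
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    """KL divergence KL(N(mu0, var0) || N(mu1, var1)) between univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def itvis_edge_value(subset_a, subset_b):
    """Illustrative dissimilarity between the distributions of two adjacent
    neurons' data subsets, each summarized by its mean and variance
    (the symmetrized KL below is an assumed choice, not the paper's)."""
    mu_a, var_a = subset_a.mean(), subset_a.var() + 1e-9
    mu_b, var_b = subset_b.mean(), subset_b.var() + 1e-9
    return 0.5 * (gaussian_kl(mu_a, var_a, mu_b, var_b)
                  + gaussian_kl(mu_b, var_b, mu_a, var_a))
```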
https://www.readbyqxmd.com/read/28534792/neural-ailc-for-error-tracking-against-arbitrary-initial-shifts
#2
Mingxuan Sun, Tao Wu, Lejian Chen, Guofeng Zhang
This paper concerns adaptive iterative learning control using neural networks for systems performing repetitive tasks over a finite time interval. It addresses two standing issues of such iterative learning control processes: the initial condition problem and the approximation error. Instead of state tracking, an error tracking approach is proposed to tackle the problem arising from arbitrary initial shifts. The desired error trajectory is prespecified at the design stage and can be tailored to different tracking tasks...
May 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28534790/neighborhood-based-stopping-criterion-for-contrastive-divergence
#3
Enrique Romero Merino, Ferran Mazzanti Castrillejo, Jordi Delgado Pin
Restricted Boltzmann Machines (RBMs) are general unsupervised learning devices for learning generative models of data distributions. RBMs are often trained using the Contrastive Divergence (CD) learning algorithm, an approximation to the gradient of the data log-likelihood (logL). A simple reconstruction error is often used as a stopping criterion for CD, although several authors have raised doubts concerning the reliability of this procedure. In many cases, the evolution curve of the reconstruction error is monotonic while the logL is not, indicating that the former is not a good estimator of the optimal stopping point for learning...
May 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
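For context, the quantity under scrutiny is easy to reproduce. Below is a minimal CD-1 training step for a binary RBM in NumPy that returns the reconstruction error commonly used as the stopping signal; the paper's neighborhood-based criterion itself is not reproduced, only the baseline it questions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_epoch(V, W, b, c, lr=0.1):
    """One CD-1 epoch on binary data V (n_samples x n_visible).
    W: (n_visible x n_hidden), b: visible bias, c: hidden bias.
    Returns updated parameters and the mean reconstruction error."""
    ph = sigmoid(V @ W + c)                      # P(h=1 | v), positive phase
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + b)                    # one-step reconstruction P(v=1 | h)
    ph_rec = sigmoid(pv @ W + c)                 # negative phase
    W += lr * (V.T @ ph - pv.T @ ph_rec) / len(V)
    b += lr * (V - pv).mean(axis=0)
    c += lr * (ph - ph_rec).mean(axis=0)
    return W, b, c, np.mean((V - pv) ** 2)       # reconstruction error
```

In the failure mode the abstract describes, this returned error keeps decreasing monotonically even while the log-likelihood deteriorates, so stopping on it can be misleading.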
https://www.readbyqxmd.com/read/28534788/rankmap-a-framework-for-distributed-learning-from-dense-data-sets
#4
Azalia Mirhoseini, Eva L Dyer, Ebrahim M Songhori, Richard Baraniuk, Farinaz Koushanfar
This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms on massive and dense data sets. Our framework exploits the structure of the data to scalably factorize it into an ensemble of lower-rank subspaces. The factorization creates sparse low-dimensional representations of the data, a property that is leveraged to devise effective mapping and scheduling of iterative learning algorithms on distributed computing machines. We provide two APIs, one matrix-based and one graph-based, which facilitate automated adoption of the framework for several contemporary learning applications...
May 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28534794/multiclass-learning-with-partially-corrupted-labels
#5
Ruxin Wang, Tongliang Liu, Dacheng Tao
Traditional classification systems rely heavily on sufficient training data with accurate labels. However, the quality of the collected data depends on the labelers, among whom there may be inexperienced labelers who produce unexpected labels that degrade the performance of a learning system. In this paper, we investigate the multiclass classification problem where a certain number of training examples are randomly labeled. Specifically, we show that this issue can be formulated as a label noise problem. To perform multiclass classification, we employ the widely used importance reweighting strategy so that learning on noisy data more closely reflects the results on noise-free data...
May 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
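As an illustration of the importance-reweighting idea, here is a sketch for the binary, class-conditional noise case (the paper handles the multiclass setting). The flip rates rho_pos and rho_neg are assumed known or estimated, and the weight formula is the standard one from the label noise literature, not necessarily the paper's.

```python
import numpy as np

def noise_weights(p_noisy, y, rho_pos, rho_neg):
    """Illustrative importance weights for class-conditional label noise,
    binary case. rho_pos = P(noisy=-1 | clean=+1), rho_neg = P(noisy=+1 |
    clean=-1), assumed known or estimated; p_noisy is the model's estimate
    of the probability of the observed (noisy) label given x. The weight
    approximates P(clean label = observed label | x) / P(observed label | x)."""
    rho_other = np.where(y == 1, rho_neg, rho_pos)
    return (p_noisy - rho_other) / np.clip((1 - rho_pos - rho_neg) * p_noisy,
                                           1e-9, None)

def reweighted_risk(losses, weights):
    """Reweighted empirical risk: training on noisy labels with these
    weights approximates the risk under noise-free labels."""
    return np.average(losses, weights=weights)
```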
https://www.readbyqxmd.com/read/28534791/a-self-paced-regularization-framework-for-multilabel-learning
#6
Changsheng Li, Fan Wei, Junchi Yan, Xiaoyu Zhang, Qingshan Liu, Hongyuan Zha
In this brief, we propose a novel multilabel learning framework, called multilabel self-paced learning, in an attempt to incorporate the self-paced learning (SPL) scheme into the regime of multilabel learning. Specifically, we first propose a new multilabel learning formulation that introduces a self-paced function as a regularizer, so as to simultaneously prioritize label learning tasks and instances in each iteration. Considering that different multilabel learning scenarios often need different self-paced schemes during learning, we provide a general way to find the desired self-paced functions...
May 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
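The classical hard self-paced function makes the scheme concrete: instances whose current loss is below an "age" parameter get weight 1, the rest weight 0, and the threshold grows across iterations so harder examples are gradually admitted. This is a generic SPL building block, not the paper's multilabel-specific self-paced functions.

```python
import numpy as np

def hard_spl_weights(losses, lam):
    """Hard self-paced regularizer: an instance participates in the current
    iteration only if its loss is below the age parameter lam."""
    return (losses < lam).astype(float)

# Typical alternating scheme: fix the weights v and fit the model on the
# weighted instances; then fix the model and recompute v; then grow lam.
```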
https://www.readbyqxmd.com/read/28534789/boundary-eliminated-pseudoinverse-linear-discriminant-for-imbalanced-problems
#7
Yujin Zhu, Zhe Wang, Hongyuan Zha, Daqi Gao
Existing learning models for the classification of imbalanced data sets can be grouped as either boundary-based or nonboundary-based, depending on whether a decision hyperplane is used in the learning process. The focus of this paper is a new approach that leverages the advantages of both. Specifically, our new model partitions the input space into three parts by creating two additional boundaries in the training process, and then makes the final decision based on a heuristic measurement between the test sample and a subset of selected training samples...
May 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
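The base learner named in the title is the classical pseudoinverse linear discriminant, which fits one-hot class targets by least squares. A minimal sketch follows; the boundary-elimination step and the three-way input-space partition are the paper's contribution and are not shown.

```python
import numpy as np

def pseudoinverse_discriminant(X, y, n_classes):
    """Classical pseudoinverse linear discriminant: W = pinv([X, 1]) @ Y
    for one-hot targets Y (y must be integer class indices)."""
    X_aug = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    Y = np.eye(n_classes)[y]                      # one-hot targets
    return np.linalg.pinv(X_aug) @ Y

def predict(W, X):
    """Assign each sample to the class with the largest linear score."""
    X_aug = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(X_aug @ W, axis=1)
```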
https://www.readbyqxmd.com/read/28504952/exponential-synchronization-of-networked-chaotic-delayed-neural-network-by-a-hybrid-event-trigger-scheme
#8
Zhongyang Fei, Chaoxu Guan, Huijun Gao
This paper is concerned with exponential synchronization for master-slave chaotic delayed neural networks under an event-triggered control scheme. The model is established in a networked control framework, where both external disturbance and network-induced delay are taken into consideration. The aim is to synchronize the master and slave systems under limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-trigger approach, which not only reduces the number of data packets sent out, but also rules out the Zeno phenomenon...
May 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28504950/graph-regularized-restricted-boltzmann-machine
#9
Dongdong Chen, Jiancheng Lv, Zhang Yi
The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space...
May 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
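A common way to encode the manifold preservation that the abstract says existing RBM models disregard is a graph Laplacian penalty on the hidden representations. The sketch below shows that penalty in isolation; how the paper couples it to RBM training is not specified in the truncated abstract, so this is an assumed, generic form.

```python
import numpy as np

def laplacian_penalty(H, W_adj):
    """Manifold-style graph regularizer tr(H^T L H) with L = D - W_adj.
    It penalizes hidden representations H (n_samples x n_hidden) that
    differ across strongly connected (similar) inputs in the adjacency
    graph W_adj. One common formulation; the paper's may differ."""
    L = np.diag(W_adj.sum(axis=1)) - W_adj
    return np.trace(H.T @ L @ H)
```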
https://www.readbyqxmd.com/read/28504949/cascaded-subpatch-networks-for-effective-cnns
#10
Xiaoheng Jiang, Yanwei Pang, Manli Sun, Xuelong Li
Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size H×W (typically, H is small and equal to W, e.g., H is 5 or 7). Generally, the size of the filter equals the size H×W of the input patch. We argue that the representational ability of this equal-size strategy is not strong enough. To overcome this drawback, we propose a subpatch filter whose spatial size h×w is smaller than H×W. The proposed subpatch filter consists of two subsequent filters...
May 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
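A hedged reading of "two subsequent filters" is a small h×w convolution followed by a 1×1 convolution that combines its responses; the truncated abstract does not confirm this, and the channel sizes below are illustrative. A PyTorch sketch under that assumption:

```python
import torch.nn as nn

# Assumed structure of a subpatch filter: a small h x w filter (here 3x3,
# smaller than the H x W patch) followed by a 1x1 combining filter.
# All channel counts are illustrative, not from the paper.
subpatch_filter = nn.Sequential(
    nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(in_channels=64, out_channels=64, kernel_size=1),  # 1x1 combiner
    nn.ReLU(inplace=True),
)
```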
https://www.readbyqxmd.com/read/28504948/substructural-regularization-with-data-sensitive-granularity-for-sequence-transfer-learning
#11
Shichang Sun, Hongbo Liu, Jiana Meng, C L Philip Chen, Yu Yang
Sequence transfer learning is of interest in both academia and industry with the emergence of numerous new text domains from Twitter and other social media tools. In this paper, we put forward a data-sensitive granularity for transfer learning, and then propose a novel substructural regularization transfer learning model (STLM) to preserve target-domain features at substructural granularity, in light of the labeled data set size. Our model is underpinned by the hidden Markov model and regularization theory, where the substructural representation can be integrated as a penalty after measuring the dissimilarity of substructures between the target domain and STLM with relative entropy...
May 12, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28504951/a-sequential-learning-approach-for-scaling-up-filter-based-feature-subset-selection
#12
Gregory Ditzler, Robi Polikar, Gail Rosen
Increasingly, many machine learning applications are associated with very large data sets whose sizes were almost unimaginable just a short time ago. As a result, many current algorithms cannot handle, or do not scale to, today's extremely large volumes of data. Fortunately, not all features that make up a typical data set carry information that is relevant or useful for prediction, and identifying and removing such irrelevant features can significantly reduce the total data size. The unfortunate dilemma, however, is that some current data sets are so large that common feature selection algorithms, whose very goal is to reduce the dimensionality, cannot handle them, creating a vicious cycle...
May 11, 2017: IEEE Transactions on Neural Networks and Learning Systems
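One simple way to make a filter-based criterion sequential, in the spirit of (but not identical to) the paper's approach, is to score features batch by batch and aggregate the scores. A sketch using scikit-learn's mutual information scorer; the function name and the batch-averaging scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def filter_select_in_batches(X, y, k, batch_size=10000):
    """Score features with mutual information on manageable row batches,
    average the scores, and return the indices of the top-k features.
    A simple sequential scheme, not the paper's algorithm."""
    scores = np.zeros(X.shape[1])
    n_batches = 0
    for start in range(0, len(X), batch_size):
        batch = slice(start, start + batch_size)
        scores += mutual_info_classif(X[batch], y[batch])
        n_batches += 1
    return np.argsort(scores / n_batches)[::-1][:k]
```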
https://www.readbyqxmd.com/read/28500013/new-splitting-criteria-for-decision-trees-in-stationary-data-streams
#13
Maciej Jaworski, Piotr Duda, Leszek Rutkowski
The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality...
May 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
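The Hoeffding recipe the abstract refers to is compact enough to state directly: a VFDT-style tree splits a leaf once the gap between the best and second-best split criterion values exceeds the Hoeffding bound. A sketch of that classical test:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound used by VFDT-style trees: with probability 1 - delta,
    the empirical mean of n observations of a variable with the given range
    lies within eps of its true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best, gain_second, value_range, delta, n):
    """Split when the observed criterion gap exceeds the bound."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)
```

The paper's point is that this guarantee is derived for sample means of bounded variables, which criteria such as information gain or the Gini index are not; hence the need for new splitting criteria.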
https://www.readbyqxmd.com/read/28500012/memcomputing-numerical-inversion-with-self-organizing-logic-gates
#14
Haik Manukian, Fabio L Traversa, Massimiliano Di Ventra
We propose to use digital memcomputing machines (DMMs), implemented with self-organizing logic gates (SOLGs), to solve the problem of numerical inversion. Starting from fixed-point scalar inversion, we describe the generalization to solving linear systems and matrix inversion. This method, when realized in hardware, will output the result in only one computational step. As an example, we perform simulations of the scalar case using a 5-bit logic circuit made of SOLGs, and show that the circuit successfully performs the inversion...
May 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28500010/robust-latent-subspace-learning-for-image-classification
#15
Xiaozhao Fang, Shaohua Teng, Zhihui Lai, Zhaoshui He, Shengli Xie, Wai Keung Wong
This paper proposes a novel method, called robust latent subspace learning (RLSL), for image classification. We formulate the RLSL problem as a joint optimization over both the latent subspace learning and the classification model parameter prediction, which simultaneously minimizes: 1) the regression loss between the learned data representation and the objective outputs and 2) the reconstruction error between the learned data representation and the original inputs. The latent subspace serves as a bridge that is expected to seamlessly connect the original visual features and their class labels and hence improve the overall prediction performance...
May 10, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28500011/improved-stability-and-stabilization-results-for-stochastic-synchronization-of-continuous-time-semi-markovian-jump-neural-networks-with-time-varying-delay
#16
Yanling Wei, Ju H Park, Hamid Reza Karimi, Yu-Chu Tian, Hoyoul Jung
Continuous-time semi-Markovian jump neural networks (semi-MJNNs) are MJNNs whose transition rates are not constant but depend on the random sojourn time. Addressing stochastic synchronization of semi-MJNNs with time-varying delay, an improved stochastic stability criterion is derived in this paper to guarantee stochastic synchronization of the response systems with the drive systems. This is achieved by constructing a semi-Markovian Lyapunov-Krasovskii functional, together with a novel integral inequality and the characteristics of cumulative distribution functions...
May 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28500009/end-to-end-feature-aware-label-space-encoding-for-multilabel-classification-with-many-classes
#17
Zijia Lin, Guiguang Ding, Jungong Han, Ling Shao
To make the problem of multilabel classification with many classes more tractable, in recent years, academia has seen efforts devoted to performing label space dimension reduction (LSDR). Specifically, LSDR encodes high-dimensional label vectors into low-dimensional code vectors lying in a latent space, so as to train predictive models at much lower costs. With respect to the prediction, it performs classification for any unseen instance by recovering a label vector from its predicted code vector via a decoding process...
May 9, 2017: IEEE Transactions on Neural Networks and Learning Systems
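A baseline LSDR scheme, in the spirit of the principal label space transformation, encodes label vectors with a truncated SVD of the label matrix and decodes by thresholding the reconstruction. The sketch below is that baseline only; the paper's feature-aware, end-to-end encoding is not reproduced, and the function names are hypothetical.

```python
import numpy as np

def lsdr_fit(Y, code_dim):
    """Baseline label space dimension reduction: truncated SVD of the
    n_samples x n_labels binary label matrix Y. Returns the projection
    matrix P (code_dim x n_labels)."""
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt[:code_dim]

def lsdr_encode(Y, P):
    return Y @ P.T                               # low-dimensional code vectors

def lsdr_decode(Z, P, threshold=0.5):
    return (Z @ P >= threshold).astype(int)      # recover binary label vectors
```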
https://www.readbyqxmd.com/read/28489555/distributed-adaptive-containment-control-for-a-class-of-nonlinear-multiagent-systems-with-input-quantization
#18
Chenliang Wang, Changyun Wen, Qinglei Hu, Wei Wang, Xiuyu Zhang
This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown...
May 5, 2017: IEEE Transactions on Neural Networks and Learning Systems
https://www.readbyqxmd.com/read/28489554/reversed-spectral-hashing
#19
Qingshan Liu, Guangcan Liu, Lai Li, Xiao-Tong Yuan, Meng Wang, Wei Liu
Hashing is emerging as a powerful tool for building highly efficient indices in large-scale search systems. In this paper, we study spectral hashing (SH), a classical method of unsupervised hashing. In general, SH solves for the hash codes by minimizing an objective function that tries to preserve the similarity structure of the given data. Although computationally simple, SH very often performs unsatisfactorily and lags distinctly behind the state-of-the-art methods. We observe that the inferior performance of SH is mainly due to its imperfect formulation; that is, minimizing the objective in SH does not actually ensure that the similarity structure of the high-dimensional data is preserved in the low-dimensional hash code space...
May 5, 2017: IEEE Transactions on Neural Networks and Learning Systems
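The relaxation the abstract criticizes can be written down directly: SH drops the binary constraint and takes the smallest nontrivial eigenvectors of the graph Laplacian, then thresholds them into bits. A sketch of that classical step (the paper's reversed formulation is not shown):

```python
import numpy as np

def sh_relaxed_codes(W_adj, n_bits):
    """Relaxed spectral hashing step: bits come from thresholding the
    smallest nontrivial eigenvectors of the graph Laplacian L = D - W,
    which minimizes sum_ij W_ij * ||y_i - y_j||^2 once the binary
    constraint is dropped. W_adj is a symmetric affinity matrix."""
    L = np.diag(W_adj.sum(axis=1)) - W_adj
    _, eigvecs = np.linalg.eigh(L)
    Y = eigvecs[:, 1:n_bits + 1]    # skip the trivial constant eigenvector
    return (Y > 0).astype(int)      # sign thresholding to binary codes
```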
https://www.readbyqxmd.com/read/28489553/data-driven-learning-control-for-stochastic-nonlinear-systems-multiple-communication-constraints-and-limited-storage
#20
Dong Shen
This paper proposes a data-driven learning control method for stochastic nonlinear systems under random communication conditions, including data dropouts, communication delays, and packet transmission disordering. A renewal mechanism is added to the buffer to regulate the arrived packets, and a recognition mechanism is introduced to the controller for the selection of suitable update packets. Both intermittent and successive update schemes are proposed based on the conventional P-type iterative learning control algorithm, and are shown to converge to the desired input with probability one...
May 5, 2017: IEEE Transactions on Neural Networks and Learning Systems
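The conventional P-type update that the paper's intermittent and successive schemes build on is essentially a one-liner: the next trial's input is the current input plus a gain times the tracking error. A sketch, with the buffer renewal and packet-selection mechanisms omitted:

```python
def p_type_ilc_update(u_k, e_k, gain):
    """Conventional P-type iterative learning control over a finite trial:
    u_{k+1}(t) = u_k(t) + gain * e_k(t+1), applied pointwise across the
    horizon (u_k and e_k are arrays over the trial, suitably shifted).
    The paper adds buffer renewal and packet selection on top of this
    update to cope with dropouts, delays, and disordering."""
    return u_k + gain * e_k
```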