
IEEE Transactions on Neural Networks and Learning Systems

https://read.qxmd.com/read/38652631/relation-aware-heterogeneous-graph-network-for-learning-intermodal-semantics-in-textbook-question-answering
#1
JOURNAL ARTICLE
Sai Zhang, Yunjie Wu, Xiaowang Zhang, Zhiyong Feng, Liang Wan, Zhiqiang Zhuang
The textbook question answering (TQA) task aims to infer answers for given questions from a multimodal context, including text and diagrams. The existing studies have aggregated intramodal semantics extracted from a single modality but have yet to capture the intermodal semantics between different modalities. A major challenge in learning intermodal semantics is maintaining lossless intramodal semantics while bridging the gap of semantics caused by heterogeneity. In this article, we propose an intermodal relation-aware heterogeneous graph network (IMR-HGN) to extract the intermodal semantics for TQA, which aggregates different modalities while learning features rather than representing them independently...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652630/new-bounds-on-the-accuracy-of-majority-voting-for-multiclass-classification
#2
JOURNAL ARTICLE
Sina Aeeneh, Nikola Zlatanov, Jiangshan Yu
Majority voting is a simple mathematical function that returns the most frequently occurring value within a given set. As a popular decision fusion technique (DFT), the majority voting function (MVF) finds applications in resolving conflicts, where several independent voters report their opinions on a classification problem. Despite its importance and its various applications in ensemble learning, data crowdsourcing, remote sensing, and data oracles for blockchains, the accuracy of the MVF for the general multiclass classification problem has remained unknown...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
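The majority voting function the abstract analyzes is simple to state; a minimal Python sketch (the paper's contribution is accuracy bounds on this function for multiclass problems, not its implementation):

```python
from collections import Counter

def majority_vote(votes):
    """Return the most frequent label among independent voters' reports.

    Ties are broken by first-encountered order in the Counter."""
    counts = Counter(votes)
    # most_common(1) returns [(label, count)] for the top label
    return counts.most_common(1)[0][0]

print(majority_vote(["cat", "dog", "cat", "bird", "cat"]))  # cat
```

The paper's question is statistical: how often this fused decision is correct as a function of the number of voters and their per-class error rates.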
https://read.qxmd.com/read/38652629/geometric-matching-for-cross-modal-retrieval
#3
JOURNAL ARTICLE
Zheng Wang, Zhenwei Gao, Yang Yang, Guoqing Wang, Chengbo Jiao, Heng Tao Shen
Despite its significant progress, cross-modal retrieval still suffers from one-to-many matching cases, where a single query can match multiple semantic instances in the other modality. However, existing approaches usually map heterogeneous data into the learned space as deterministic point vectors. In spite of their remarkable performance in matching the most similar instance, such deterministic point embeddings insufficiently represent the rich semantics of one-to-many correspondence...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652628/multiobjective-evolutionary-learning-for-multitask-quality-prediction-problems-in-continuous-annealing-process
#4
JOURNAL ARTICLE
Chang Liu, Lixin Tang, Kainan Zhang, Xuanqi Xu
In industrial production processes, the mechanical properties of materials will directly determine the stability and consistency of product quality. However, detecting the current mechanical property is time-consuming and labor-intensive, and the material quality cannot be controlled in time. To achieve high-quality steel materials, developing a novel intelligent manufacturing technology that can satisfy multitask predictions for material properties has become a new research trend. This article proposes a multiobjective evolutionary learning method based on a two-stage model with topological sparse autoencoder (TSAE) and ensemble learning...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652627/robust-federated-learning-maximum-correntropy-aggregation-against-byzantine-attacks
#5
JOURNAL ARTICLE
Zhirong Luan, Wenrui Li, Meiqin Liu, Badong Chen
As an emerging decentralized machine learning technique, federated learning organizes collaborative training and preserves the privacy and security of participants. However, untrustworthy devices, typically Byzantine attackers, pose a significant challenge to federated learning since they can upload malicious parameters to corrupt the global model. To defend against such attacks, we propose a novel robust aggregation method-maximum correntropy aggregation (MCA), which applies the maximum correntropy criterion (MCC) to derive a central value from parameters...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
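The maximum correntropy criterion behind MCA can be illustrated on scalars. The sketch below is a standard fixed-point (half-quadratic) iteration for a correntropy-maximizing center, not the authors' exact aggregation rule; the kernel width `sigma` and iteration count are illustrative assumptions:

```python
import math

def mcc_center(values, sigma=1.0, iters=50):
    """Fixed-point estimate of the m maximizing correntropy
    sum_i exp(-(x_i - m)^2 / (2 sigma^2)): a robust 'center'
    that exponentially down-weights outliers (e.g., Byzantine updates)."""
    m = sum(values) / len(values)  # start from the plain mean
    for _ in range(iters):
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in values]
        m = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
    return m

honest = [1.0, 1.1, 0.9, 1.05]
byzantine = [10.0]  # one attacker uploads a corrupted parameter
print(mcc_center(honest + byzantine, sigma=0.5))  # ≈ 1.0, vs. plain mean 2.81
```

Because the Gaussian kernel assigns the outlier a weight near zero, the aggregate stays close to the honest values, which is the intuition behind using MCC for Byzantine-robust aggregation.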
https://read.qxmd.com/read/38652626/select-your-own-counterparts-self-supervised-graph-contrastive-learning-with-positive-sampling
#6
JOURNAL ARTICLE
Zehong Wang, Donghua Yu, Shigen Shen, Shichao Zhang, Huawen Liu, Shuang Yao, Maozu Guo
Contrastive learning (CL) has emerged as a powerful approach for self-supervised learning. However, it suffers from sampling bias, which hinders its performance. While the mainstream solutions, hard negative mining (HNM) and supervised CL (SCL), have been proposed to mitigate this critical issue, they do not effectively address graph CL (GCL). To address it, we propose graph positive sampling (GPS) and three contrastive objectives. The former is a novel learning paradigm designed to leverage the inherent properties of graphs for improved GCL models, which utilizes four complementary similarity measurements, including node centrality, topological distance, neighborhood overlapping, and semantic distance, to select positive counterparts for each node...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652625/deep-probabilistic-principal-component-analysis-for-process-monitoring
#7
JOURNAL ARTICLE
Xiangyin Kong, Yimeng He, Zhihuan Song, Tong Liu, Zhiqiang Ge
Probabilistic latent variable models (PLVMs), such as probabilistic principal component analysis (PPCA), are widely employed in process monitoring and fault detection of industrial processes. This article proposes a novel deep PPCA (DePPCA) model, which has the advantages of both probabilistic modeling and deep learning. The construction of DePPCA includes a greedy layer-wise pretraining phase and a unified end-to-end fine-tuning phase. The former establishes a hierarchical deep structure based on cascading multiple layers of the PPCA module to extract high-level features...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
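DePPCA cascades PPCA modules into a deep structure; as a reminder of the single-layer building block, here is the closed-form maximum-likelihood PPCA solution (the standard Tipping-Bishop result, not the authors' deep construction):

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form ML PPCA: X is (n, d) data, q the latent dimension.
    Returns the loading matrix W (d, q) and noise variance sigma2."""
    Xc = X - X.mean(axis=0)
    # eigendecomposition of the sample covariance
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]          # sort eigenvalues descending
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                # ML noise variance: mean of discarded spectrum
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(0)
# synthetic data: 2-D latent signal embedded in 5-D, plus small noise
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
W, sigma2 = ppca_fit(X, q=2)
print(W.shape, round(sigma2, 3))  # (5, 2) and a small noise variance
```

For process monitoring, the fitted W and sigma2 give a likelihood under normal operating conditions, so low-likelihood samples can be flagged as faults.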
https://read.qxmd.com/read/38652624/multiscale-deep-learning-for-detection-and-recognition-a-comprehensive-survey
#8
JOURNAL ARTICLE
Licheng Jiao, Mengjiao Wang, Xu Liu, Lingling Li, Fang Liu, Zhixi Feng, Shuyuan Yang, Biao Hou
Recently, the multiscale problem in computer vision has gradually attracted people's attention. This article focuses on multiscale representation for object detection and recognition, comprehensively introduces the development of multiscale deep learning, and constructs an easy-to-understand, but powerful knowledge structure. First, we give the definition of scale, explain the multiscale mechanism of human vision, and then lead to the multiscale problem discussed in computer vision. Second, advanced multiscale representation methods are introduced, including pyramid representation, scale-space representation, and multiscale geometric representation...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652623/zs-vat-learning-unbiased-attribute-knowledge-for-zero-shot-recognition-through-visual-attribute-transformer
#9
JOURNAL ARTICLE
Zongyan Han, Zhenyong Fu, Shuo Chen, Le Hui, Guangyu Li, Jian Yang, Chang Wen Chen
In zero-shot learning (ZSL), attribute knowledge plays a vital role in transferring knowledge from seen classes to unseen classes. However, most existing ZSL methods learn biased attribute knowledge, which usually results in biased attribute prediction and a decline in zero-shot recognition performance. To solve this problem and learn unbiased attribute knowledge, we propose a visual attribute Transformer for zero-shot recognition (ZS-VAT), which is an effective and interpretable Transformer designed specifically for ZSL...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652622/toward-efficient-convolutional-neural-networks-with-structured-ternary-patterns
#10
JOURNAL ARTICLE
Christos Kyrkou
High-efficiency deep learning (DL) models are necessary not only to facilitate their use in devices with limited resources but also to improve resources required for training. Convolutional neural networks (ConvNets) typically exert severe demands on local device resources and this conventionally limits their adoption within mobile and embedded platforms. This brief presents work toward utilizing static convolutional filters generated from the space of local binary patterns (LBPs) and Haar features to design efficient ConvNet architectures...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38652621/dual-channel-adaptive-scale-hypergraph-encoders-with-cross-view-contrastive-learning-for-knowledge-tracing
#11
JOURNAL ARTICLE
Jiawei Li, Yuanfei Deng, Yixiu Qin, Shun Mao, Yuncheng Jiang
Knowledge tracing (KT) refers to predicting learners' performance in the future according to their historical responses, which has become an essential task in intelligent tutoring systems. Most deep learning-based methods usually model the learners' knowledge states via recurrent neural networks (RNNs) or attention mechanisms. Recently emerging graph neural networks (GNNs) assist the KT model to capture the relationships such as question-skill and question-learner. However, non-pairwise and complex higher-order information among responses is ignored...
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648135/self-supervised-temporal-graph-learning-with-temporal-and-structural-intensity-alignment
#12
JOURNAL ARTICLE
Meng Liu, Ke Liang, Yawei Zhao, Wenxuan Tu, Sihang Zhou, Xinbiao Gan, Xinwang Liu, Kunlun He
Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information, which has recently garnered increasing attention. In contrast to static graphs, temporal graphs are typically organized as node interaction sequences over continuous time rather than an adjacency matrix. Most temporal graph learning methods model current interactions by incorporating historical neighborhood. However, such methods only consider first-order temporal information while disregarding crucial high-order structural information, resulting in suboptimal performance...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648134/boosting-reinforcement-learning-via-hierarchical-game-playing-with-state-relay
#13
JOURNAL ARTICLE
Chanjuan Liu, Jinmiao Cong, Guangyuan Liu, Guifei Jiang, Xirong Xu, Enqiang Zhu
Due to its wide application, deep reinforcement learning (DRL) has been extensively studied in the motion planning community in recent years. However, in the current DRL research, regardless of task completion, the state information of the agent will be reset afterward. This leads to a low sample utilization rate and hinders further explorations of the environment. Moreover, in the initial training stage, the agent has a weak learning ability in general, which affects the training efficiency in complex tasks...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648133/general-hyperspectral-image-super-resolution-via-meta-transfer-learning
#14
JOURNAL ARTICLE
Yingsong Cheng, Xinya Wang, Yong Ma, Xiaoguang Mei, Minghui Wu, Jiayi Ma
Recent advances in deep learning-based methods have led to significant progress in the hyperspectral super-resolution (SR). However, the scarcity and the high dimension of data have hindered further development since deep models require sufficient data to learn stable patterns. Moreover, the huge domain differences between hyperspectral image (HSI) datasets pose a significant challenge in generalizability. To address these problems, we present a general hyperspectral SR framework via meta-transfer learning (MTL)...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648132/foreground-capture-feature-pyramid-network-oriented-object-detection-in-complex-backgrounds
#15
JOURNAL ARTICLE
Honggui Han, Qiyu Zhang, Fangyu Li, Yongping Du
Feature pyramids are widely adopted in visual detection models for capturing multiscale features of objects. However, the utilization of feature pyramids in practical object detection tasks is prone to complex background interference, resulting in suboptimal capture of discriminative multiscale foreground semantic features. In this article, a foreground capture feature pyramid network (FCFPN) for multiscale object detection is proposed, to address the problem of inadequate feature learning in complex backgrounds...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648131/satf-a-scalable-attentive-transfer-framework-for-efficient-multiagent-reinforcement-learning
#16
JOURNAL ARTICLE
Bin Chen, Zehong Cao, Quan Bai
It is challenging to train an efficient learning procedure with multiagent reinforcement learning (MARL) when the number of agents increases as the observation space exponentially expands, especially in large-scale multiagent systems. In this article, we propose a scalable attentive transfer framework (SATF) for efficient MARL, which achieves goals faster and more accurately in homogeneous and heterogeneous combat tasks by transferring learned knowledge from a small number of agents (4) to a large number of agents (up to 64)...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648130/multichannel-orthogonal-transform-based-perceptron-layers-for-efficient-resnets
#17
JOURNAL ARTICLE
Hongyi Pan, Emadeldeen Hamdan, Xin Zhu, Salih Atici, Ahmet Enis Cetin
In this article, we propose a set of transform-based neural network layers as an alternative to the [Formula: see text] Conv2D layers in convolutional neural networks (CNNs). The proposed layers can be implemented based on orthogonal transforms, such as the discrete cosine transform (DCT), Hadamard transform (HT), and biorthogonal block wavelet transform (BWT). Furthermore, by taking advantage of the convolution theorems, convolutional filtering operations are performed in the transform domain using elementwise multiplications...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
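The convolution-theorem trick the abstract mentions is easy to illustrate: filtering becomes an elementwise product in the transform domain. A minimal sketch with the plain DFT (the paper's layers use DCT, HT, and BWT variants with trainable pointwise weights, which are omitted here):

```python
import numpy as np

def fft_conv(x, h):
    """Circular convolution of x with kernel h via the convolution theorem:
    transform both, multiply elementwise, transform back."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 0.0])   # identity kernel: delta at position 0
print(np.allclose(fft_conv(x, h), x))  # True: filtering reduced to a pointwise product
```

Replacing spatial convolutions with transform-domain pointwise multiplications is what lets such layers cut the multiply count relative to standard Conv2D.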
https://read.qxmd.com/read/38648129/expected-policy-gradient-for-network-aggregative-markov-games-in-continuous-space
#18
JOURNAL ARTICLE
Alireza Ramezani Moghaddam, Hamed Kebriaei
In this article, we investigate the Nash-seeking problem of a set of agents, playing an infinite network aggregative Markov game. In particular, we focus on a noncooperative framework where each agent selfishly aims at maximizing its long-term average reward without having explicit information on the model of the environment dynamics and its own reward function. The main contribution of this article is to develop a continuous multiagent reinforcement learning (MARL) algorithm for the Nash-seeking problem in infinite dynamic games with convergence guarantee...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648128/finite-time-consensus-adaptive-neural-network-control-for-nonlinear-multiagent-systems-under-pde-models
#19
JOURNAL ARTICLE
Yan-Jun Liu, Xuebin Shang, Li Tang, Sai Zhang
In this article, a novel adaptive control method based on neural networks is proposed for a class of multiagent systems (MASs) with nonlinear functions and external disturbances. First, the approximation properties of neural networks are used to approximate the MAS partial differential equation (PDE) model with nonlinear terms containing two variables: time t and spatial variable x. Second, an adaptive controller is constructed to actuate the parabolic MAS to reach consensus under external disturbances. Based on this, the finite-time theorem and special inequalities are applied to prove the stability of the closed-loop system...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648127/size-and-depth-of-monotone-neural-networks-interpolation-and-approximation
#20
JOURNAL ARTICLE
Dan Mikulincer, Daniel Reichman
We study monotone neural networks with threshold gates where all the weights (other than the biases) are nonnegative. We focus on the expressive power and efficiency of the representation of such networks. Our first result establishes that every monotone function over [0,1]^d can be approximated within arbitrarily small additive error by a depth-4 monotone network. When , we improve upon the previous best-known construction, which has a depth of d+1. Our proof goes by solving the monotone interpolation problem for monotone datasets using a depth-4 monotone threshold network...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
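The model class the abstract studies is concrete: threshold gates with nonnegative weights compute only monotone functions. A tiny illustrative example (not from the paper's construction):

```python
def threshold_gate(inputs, weights, bias):
    """Threshold gate: outputs 1 iff the weighted sum reaches the bias.
    With all weights nonnegative, raising any input can never flip 1 -> 0,
    so the gate is monotone."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= bias else 0

def majority3(x):
    # a single monotone gate computing the 2-out-of-3 majority function
    return threshold_gate(x, [1, 1, 1], 2)

print([majority3(v) for v in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]])  # [0, 1, 1]
```

The paper's depth-4 result concerns networks of such gates approximating arbitrary monotone functions over [0,1]^d.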