Neural Networks: the Official Journal of the International Neural Network Society

https://www.readbyqxmd.com/read/30219742/the-vapnik-chervonenkis-dimension-of-graph-and-recursive-neural-networks
#1
Franco Scarselli, Ah Chung Tsoi, Markus Hagenbuchner
The Vapnik-Chervonenkis dimension (VC-dim) characterizes the sample learning complexity of a classification model and is often used as an indicator of the generalization capability of a learning method. The VC-dim has been studied for common feed-forward neural networks, but it has yet to be studied for Graph Neural Networks (GNNs) and Recursive Neural Networks (RecNNs). This paper provides upper bounds on the order of growth of the VC-dim of GNNs and RecNNs. GNNs and RecNNs belong to a new class of neural network models capable of processing inputs that are given as graphs...
September 1, 2018: Neural Networks: the Official Journal of the International Neural Network Society
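The bounds of entry #1 are not reproduced in the snippet above, but as a reminder of what the VC-dimension measures, the minimal Python sketch below (not from the paper) checks that a linear classifier in the plane shatters three points in general position yet cannot realize every labeling of four points. Separability of each labeling is tested exactly with a small linear program; the point sets are arbitrary illustrative choices.

```python
# Illustration of VC-dimension for 2-D linear classifiers (not from the
# paper above): three points in general position can be shattered, but
# the XOR labeling of four points cannot be realized.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """Feasibility of y_i * (w . x_i + b) >= 1, checked as a linear program."""
    A_ub = -(y[:, None] * np.hstack([X, np.ones((len(X), 1))]))
    b_ub = -np.ones(len(X))
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3, method="highs")
    return res.success

def shattered(X):
    """True if every +/-1 labeling of the rows of X is linearly separable."""
    return all(linearly_separable(X, np.array(lab))
               for lab in product([-1.0, 1.0], repeat=len(X)))

three = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
four = np.array([[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
print(shattered(three))  # True: 3 points in general position are shattered
print(shattered(four))   # False: the XOR labeling of these 4 points is not separable
```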
https://www.readbyqxmd.com/read/30216871/sign-backpropagation-an-on-chip-learning-algorithm-for-analog-rram-neuromorphic-computing-systems
#2
Qingtian Zhang, Huaqiang Wu, Peng Yao, Wenqiang Zhang, Bin Gao, Ning Deng, He Qian
Currently, powerful deep learning models usually require significant resources in the form of processors and memory, which leads to very high energy consumption. The emerging resistive random access memory (RRAM) has shown great potential for constructing a scalable and energy-efficient neural network. However, it is hard to port a high-precision neural network from conventional digital CMOS hardware systems to analog RRAM systems owing to the variability of RRAM devices. A suitable on-chip learning algorithm should be developed to retrain or improve the performance of the neural network...
September 1, 2018: Neural Networks: the Official Journal of the International Neural Network Society
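The abstract of entry #2 does not spell out the learning rule, so the sketch below should be read only as one plausible illustration: a tiny network trained with updates that use just the sign of each backpropagated gradient (in the spirit of sign-based SGD), a style of rule that is friendlier to low-precision analog weights. The network size, the XOR task and the step schedule are arbitrary assumptions, not the authors' on-chip RRAM algorithm.

```python
# Illustrative sign-based weight updates (NOT the paper's on-chip RRAM
# algorithm): gradients come from ordinary backpropagation, but each
# weight moves by a fixed-magnitude step in the direction of the sign
# of its gradient.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
step = 0.05                                            # update magnitude

for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # backward pass (standard sigmoid/cross-entropy gradients)
    dout = (p - Y) / len(X)
    gW2, gb2 = h.T @ dout, dout.sum(0)
    dh = (dout @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ dh, dh.sum(0)
    # sign-only updates: only the polarity of each gradient is used
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= step * np.sign(grad)
    step *= 0.999                                      # slowly anneal the step

print(p.ravel().round(2))   # typically approaches [0, 1, 1, 0]
```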
https://www.readbyqxmd.com/read/30216873/distant-supervision-for-relation-extraction-with-hierarchical-selective-attention
#3
Peng Zhou, Jiaming Xu, Zhenyu Qi, Hongyun Bao, Zhineng Chen, Bo Xu
Distantly supervised relation extraction is an important task in the field of natural language processing. Most state-of-the-art methods have two main shortcomings. One is that they take all sentences of an entity pair as input, which results in a large computational cost, even though a few of the most relevant sentences are enough to recognize the relation of an entity pair. To tackle these problems, we propose a novel hierarchical selective attention network for relation extraction under distant supervision...
August 29, 2018: Neural Networks: the Official Journal of the International Neural Network Society
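Entry #3 builds a hierarchical model; the sketch below shows only the basic selective-attention pooling step commonly used in distantly supervised relation extraction: softmax-weighted aggregation of the sentence vectors in one entity pair's bag. The dimensions and random vectors are made up for illustration; this is not the authors' full network.

```python
# Basic selective attention over a bag of sentence vectors for one
# entity pair (a generic building block, not the full hierarchical
# model from the paper above).
import numpy as np

rng = np.random.default_rng(1)
n_sentences, dim, n_relations = 5, 16, 4

S = rng.normal(size=(n_sentences, dim))      # encoded sentences in the bag
R = rng.normal(size=(n_relations, dim))      # relation query vectors
W_out = rng.normal(size=(dim, n_relations))  # classification layer
query = R[2]                                 # attend with respect to one relation

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

scores = S @ query                 # relevance of each sentence to the relation
alpha = softmax(scores)            # attention weights over the bag
bag_vector = alpha @ S             # weighted sum: the bag representation
logits = bag_vector @ W_out        # relation scores for this entity pair
print(alpha.round(3), logits.round(3))
```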
https://www.readbyqxmd.com/read/30216869/exponential-consensus-of-discrete-time-non-linear-multi-agent-systems-via-relative-state-dependent-impulsive-protocols
#4
Yiyan Han, Chuandong Li, Zhigang Zeng, Hongfei Li
In this paper, we discuss the exponential consensus problem of discrete-time multi-agent systems with non-linear dynamics via relative state-dependent impulsive protocols. Impulsive protocols whose impulsive instants depend on the weighted relative states of any two agents are introduced for general discrete-time multi-agent systems. The analysis of such impulsive protocols is transformed into an investigation of reduced fixed-time impulsive protocols by constructing a map, which is achieved mainly by a B-equivalence method derived in the discrete-time domain...
August 27, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30241968/deeply-learnt-damped-least-squares-dl-dls-method-for-inverse-kinematics-of-snake-like-robots
#5
Olatunji Mumini Omisore, Shipeng Han, Lingxue Ren, Ahmed Elazab, Li Hui, Talaat Abdelhamid, Nureni Ayofe Azeez, Lei Wang
Recently, snake-like robots have been proposed to assist experts during medical procedures on internal organs via natural orifices. Despite their clear advantages, application in radiosurgery is still hindered by the absence of suitable designs for spatial navigation within cluttered and confined parts of the human body, and by the lack of precise and fast inverse kinematics (IK) models. In this study, a deeply-learnt damped least squares method is proposed for solving the IK of a spatial snake-like robot. The robot's model consists of several modules, and each module has a pair of serial links connected with orthogonal twists...
August 23, 2018: Neural Networks: the Official Journal of the International Neural Network Society
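Entry #5 builds a learned component on top of the classical damped least squares (DLS) step. As a refresher, the sketch below implements only the textbook DLS iteration, dtheta = J^T (J J^T + lambda^2 I)^{-1} e, for a simple planar 3-link arm; the deep-learning part of the paper, and its actual snake-robot kinematics, are not reproduced, and the link lengths, damping and target are arbitrary.

```python
# Classical damped least squares (DLS) IK iteration for a planar 3-link
# arm: dtheta = J^T (J J^T + lambda^2 I)^-1 e.  This is the well-known
# baseline the paper above builds on, not its learned variant.
import numpy as np

L = np.array([1.0, 0.8, 0.5])                 # link lengths

def forward(theta):
    """End-effector position of the planar arm."""
    angles = np.cumsum(theta)
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def jacobian(theta):
    """2 x 3 positional Jacobian of the planar arm."""
    angles = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def dls_ik(target, theta, lam=0.1, iters=200):
    for _ in range(iters):
        e = target - forward(theta)           # task-space error
        J = jacobian(theta)
        dtheta = J.T @ np.linalg.solve(J @ J.T + lam ** 2 * np.eye(2), e)
        theta = theta + dtheta
    return theta

theta = dls_ik(np.array([1.2, 1.0]), np.zeros(3))
print(forward(theta).round(4))                # close to the target [1.2, 1.0]
```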
https://www.readbyqxmd.com/read/30216872/monostable-multivibrators-as-novel-artificial-neurons
#6
Lars Keuninckx, Jan Danckaert, Guy Van der Sande
Retriggerable and non-retriggerable monostable multivibrators are simple timers with a single characteristic, their period. Motivated by the fact that monostable multivibrators are implementable in large quantities as counters in digital programmable hardware, we set out to investigate their applicability as building blocks of artificial neural networks. We derive the nonlinear input-output firing rate relations for single multivibrator neurons as well as the equilibrium firing rate of large recurrent networks...
August 23, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30216870/low-rank-and-sparse-embedding-for-dimensionality-reduction
#7
Na Han, Jigang Wu, Yingyi Liang, Xiaozhao Fang, Wai Keung Wong, Shaohua Teng
In this paper, we propose a robust subspace learning (SL) framework for dimensionality reduction that extends existing SL methods to a low-rank and sparse embedding (LRSE) framework in three respects: overall optimality, robustness and generalization. Owing to the use of low-rank and sparse constraints, both the global subspaces and the local geometric structures of the data are captured by the reconstruction coefficient matrix, and at the same time the low-dimensional embedding of the data is enforced to respect the low-rankness and sparsity...
August 18, 2018: Neural Networks: the Official Journal of the International Neural Network Society
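Objectives with low-rank and sparse terms, such as the LRSE framework in entry #7, are typically optimized by alternating two standard proximal operators. The sketch below shows those generic building blocks only (singular value thresholding for the nuclear norm, soft thresholding for the l1 norm); it is not the paper's complete solver, and the test matrix and thresholds are arbitrary.

```python
# Generic proximal operators behind low-rank + sparse objectives:
# soft thresholding (l1 / sparsity) and singular value thresholding
# (nuclear norm / low rank).  Not the paper's full algorithm.
import numpy as np

def soft_threshold(X, tau):
    """prox of tau*||X||_1: shrink every entry toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """prox of tau*||X||_*: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
M = rng.normal(size=(40, 30))

S = soft_threshold(M, 1.0)   # many entries become exactly zero
Z = svd_threshold(M, 8.0)    # small singular values are removed, lowering the rank
print(np.mean(S == 0).round(2), np.linalg.matrix_rank(M), np.linalg.matrix_rank(Z))
```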
https://www.readbyqxmd.com/read/30199783/estimation-of-neural-connections-from-partially-observed-neural-spikes
#8
Taishi Iwasaki, Hideitsu Hino, Masami Tatsuno, Shotaro Akaho, Noboru Murata
Plasticity is one of the most important properties of the nervous system; it enables animals to adjust their behavior to the ever-changing external environment. Changes in synaptic efficacy between neurons constitute one of the major mechanisms of plasticity. Therefore, estimation of neural connections is crucial for investigating information processing in the brain. Although many analysis methods have been proposed for this purpose, most of them suffer from one or all of the following mathematical difficulties: (1) only partially observed neural activity is available; (2) correlations can include both direct and indirect pseudo-interactions; and (3) the biological evidence that a neuron typically has only one type of connection (excitatory or inhibitory) should be taken into account...
August 18, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30199782/multi-view-clustering-on-unmapped-data-via-constrained-non-negative-matrix-factorization
#9
Linlin Zong, Xianchao Zhang, Xinyue Liu
Existing multi-view clustering algorithms require that the data be completely or partially mapped between each pair of views. However, this requirement cannot be satisfied in many practical settings. In this paper, we tackle the problem of multi-view clustering on unmapped data in the framework of NMF-based clustering. With the help of inter-view constraints, we define the disagreement between each pair of views through the requirement that the indicator vectors of two samples from two different views should be similar if they belong to the same cluster and dissimilar otherwise...
August 18, 2018: Neural Networks: the Official Journal of the International Neural Network Society
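The constrained multi-view formulation of entry #9 is not reproduced here; the sketch below shows only standard single-view NMF with the Lee-Seung multiplicative updates, the building block that NMF-based multi-view clustering methods extend with inter-view constraints. The data, rank and iteration count are arbitrary illustrative choices.

```python
# Standard single-view NMF with multiplicative updates (Lee & Seung),
# the building block that NMF-based multi-view clustering extends.
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(60, 40)))        # non-negative data, samples as columns
k = 5                                        # number of components/clusters
W = np.abs(rng.normal(size=(60, k)))         # basis matrix
H = np.abs(rng.normal(size=(k, 40)))         # coefficient (indicator-like) matrix
eps = 1e-10                                  # avoids division by zero

for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)     # update coefficients
    W *= (V @ H.T) / (W @ H @ H.T + eps)     # update basis

print(np.linalg.norm(V - W @ H))             # reconstruction error after training
labels = H.argmax(axis=0)                    # a crude cluster label per column of V
```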
https://www.readbyqxmd.com/read/30199781/bipartite-synchronization-in-coupled-delayed-neural-networks-under-pinning-control
#10
Fang Liu, Qiang Song, Guanghui Wen, Jinde Cao, Xinsong Yang
This paper considers bipartite leader-following synchronization in a signed network composed of an array of coupled delayed neural networks, utilizing the pinning control strategy and M-matrix theory, where the communication links between neighboring nodes of the network can be either positive or negative. Under the assumption that the node delay is bounded and differentiable, a sufficient condition in terms of a low-dimensional linear matrix inequality is derived for reaching bipartite leader-following synchronization in the signed network, based on which a simple algebraic formula is further given to estimate an upper bound of the node delay...
August 18, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30173057/estimating-regional-effects-of-climate-change-and-altered-land-use-on-biosphere-carbon-fluxes-using-distributed-time-delay-neural-networks-with-bayesian-regularized-learning
#11
Andres Schmidt, Whitney Creason, Beverly E Law
The ability to accurately predict changes of the carbon and energy balance on a regional scale is of great importance for assessing the effect of land use changes on carbon sequestration under future climate conditions. Here, a suite of land cover-specific Distributed Time Delay Neural Networks with a parameter adoption algorithm optimized through Bayesian regularization was used to model the statewide atmospheric exchange of CO2, water vapor, and energy in Oregon, with its strong spatial gradients of climate and land cover...
August 16, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30173056/low-rank-representation-with-adaptive-graph-regularization
#12
Jie Wen, Xiaozhao Fang, Yong Xu, Chunwei Tian, Lunke Fei
Low-rank representation (LRR) has attracted much attention in the data mining community. However, it has the following two problems, which greatly limit its applications: (1) it cannot discover the intrinsic structure of data, owing to its neglect of the local structure of the data; (2) the obtained graph is not the optimal graph for clustering. To solve these problems and improve clustering performance, we propose a novel graph learning method named low-rank representation with adaptive graph regularization (LRR_AGR) in this paper...
August 14, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30173055/eeg-dipole-source-localization-with-information-criteria-for-multiple-particle-filters
#13
Sho Sonoda, Keita Nakamura, Yuki Kaneda, Hideitsu Hino, Shotaro Akaho, Noboru Murata, Eri Miyauchi, Masahiro Kawasaki
Electroencephalography (EEG) is a non-invasive brain imaging technique that describes neural electrical activation with good temporal resolution. Source localization is required for clinical and functional interpretations of EEG signals and is most commonly achieved via the dipole model; however, the number of dipoles in the brain should be determined for a reasonably accurate interpretation. In this paper, we propose a dipole source localization (DSL) method that adaptively estimates the dipole number by using a novel information criterion...
August 14, 2018: Neural Networks: the Official Journal of the International Neural Network Society
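Entry #13 runs multiple particle filters and selects among them with an information criterion; neither the dipole forward model nor that criterion is reproduced here. The sketch below only shows the basic bootstrap particle filter machinery (predict, weight, resample) on a toy 1-D random-walk state, with all noise levels and sizes chosen arbitrarily.

```python
# Generic bootstrap particle filter on a 1-D random walk with noisy
# observations: the basic machinery that multiple-particle-filter
# methods such as the one above are built on.
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 50, 500
proc_std, obs_std = 0.3, 0.5

# simulate a hidden random walk and noisy observations of it
x_true = np.cumsum(rng.normal(0, proc_std, T))
y_obs = x_true + rng.normal(0, obs_std, T)

particles = rng.normal(0, 1, n_particles)
estimates = []
for y in y_obs:
    # predict: propagate particles through the process model
    particles = particles + rng.normal(0, proc_std, n_particles)
    # weight: likelihood of the observation under each particle
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    w /= w.sum()
    # estimate: posterior mean of the state
    estimates.append(np.sum(w * particles))
    # resample: draw particles in proportion to their weights
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]

print(np.mean((np.array(estimates) - x_true) ** 2).round(3))  # tracking error
```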
https://www.readbyqxmd.com/read/30177226/neural-circuits-for-learning-context-dependent-associations-of-stimuli
#14
Henghui Zhu, Ioannis Ch Paschalidis, Michael E Hasselmo
The use of reinforcement learning combined with neural networks provides a powerful framework for solving certain tasks in engineering and cognitive science. Previous research shows that neural networks have the power to automatically extract features and learn hierarchical decision rules. In this work, we investigate reinforcement learning methods for performing a context-dependent association task using two kinds of neural network models (using continuous firing rate neurons), as well as a neural circuit gating model...
August 13, 2018: Neural Networks: the Official Journal of the International Neural Network Society
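For entry #14, the sketch below is a deliberately simplified stand-in: a tabular, bandit-style Q-learner on a context-dependent association task in which the rewarded action for a stimulus depends on the current context. It illustrates only the task structure and basic reinforcement learning, not the firing-rate networks or the gating model analysed in the paper, and the task table is invented.

```python
# Toy tabular Q-learning on a context-dependent association task:
# the rewarded action for a stimulus depends on the context.
import numpy as np

rng = np.random.default_rng(0)
n_contexts, n_stimuli, n_actions = 2, 2, 2
# rule[c, s] is the rewarded action for stimulus s in context c (made up)
rule = np.array([[0, 1],
                 [1, 0]])

Q = np.zeros((n_contexts, n_stimuli, n_actions))
alpha, epsilon = 0.1, 0.1

for trial in range(5000):
    c, s = rng.integers(n_contexts), rng.integers(n_stimuli)
    if rng.random() < epsilon:                 # explore
        a = rng.integers(n_actions)
    else:                                      # exploit the current estimate
        a = int(Q[c, s].argmax())
    r = 1.0 if a == rule[c, s] else 0.0
    Q[c, s, a] += alpha * (r - Q[c, s, a])     # one-step (bandit-style) update

greedy = Q.argmax(axis=2)
print(greedy)                                  # recovers the context-dependent rule
print(np.array_equal(greedy, rule))            # True
```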
https://www.readbyqxmd.com/read/30195861/adaptive-non-negative-projective-semi-supervised-learning-for-inductive-classification
#15
Zhao Zhang, Lei Jia, Mingbo Zhao, Qiaolin Ye, Min Zhang, Meng Wang
We discuss the inductive classification problem by proposing a joint framework termed Adaptive Non-negative Projective Semi-Supervised Learning (ANP-SSL). Specifically, ANP-SSL explicitly integrates adaptive inductive label propagation, adaptive reconstruction weight learning and neighborhood-preserving projective non-negative matrix factorization (PNMF). To make the label prediction results more accurate, ANP-SSL incorporates the semi-supervised data representation and classification errors into regular PNMF for minimization, which enables it to perform adaptive weight learning and label propagation over spatially local and part-based data representations. This differs from most existing work, which usually assigns weights and predicts labels based on the original data, which often contain noise and corruptions...
August 11, 2018: Neural Networks: the Official Journal of the International Neural Network Society
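ANP-SSL in entry #15 learns its weights and projection adaptively; as a point of reference, the sketch below shows only the classical fixed-weight graph-based label propagation it departs from, in the familiar closed form F = (I - alpha*S)^{-1} Y, on a toy two-blob dataset. The kernel width, alpha and data are arbitrary assumptions.

```python
# Classical graph-based label propagation with fixed Gaussian weights:
# F = (I - alpha * S)^(-1) Y.  ANP-SSL learns weights adaptively; this
# is only the fixed-weight baseline idea on toy data.
import numpy as np

rng = np.random.default_rng(0)
# two Gaussian blobs, one labelled point per class
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
y = -np.ones(60, dtype=int)          # -1 means unlabelled
y[0], y[30] = 0, 1

# affinity matrix with a Gaussian kernel, symmetrically normalized
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
d = W.sum(1)
S = W / np.sqrt(np.outer(d, d))

# one-hot matrix of known labels, zeros for unlabelled points
Y = np.zeros((60, 2))
Y[y >= 0, y[y >= 0]] = 1.0

alpha = 0.9
F = np.linalg.solve(np.eye(60) - alpha * S, Y)   # propagate labels over the graph
pred = F.argmax(1)
print(pred[:30].sum(), pred[30:].sum())          # expected: 0 and 30
```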
https://www.readbyqxmd.com/read/30176514/a-model-of-operant-learning-based-on-chaotically-varying-synaptic-strength
#16
Tianqi Wei, Barbara Webb
Operant learning is learning based on reinforcement of behaviours. We propose a new hypothesis for operant learning at the single-neuron level based on spontaneous fluctuations of synaptic strength caused by receptor dynamics. These fluctuations allow the neural system to explore a space of outputs. If the receptor dynamics are altered by a reinforcement signal, the neural system settles into better states, i.e., states that match the environmental dynamics that determine reward. Simulations show that this mechanism can support operant learning in a feed-forward neural circuit, a recurrent neural circuit, and a spiking neural circuit controlling an agent learning in a dynamic reward and punishment situation...
August 11, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30143328/novel-deep-generative-simultaneous-recurrent-model-for-efficient-representation-learning
#17
M Alam, L Vidyaratne, K M Iftekharuddin
Representation learning plays an important role in building effective deep neural network models. Deep generative probabilistic models have been shown to be efficient at data representation learning, which is usually carried out in an unsupervised fashion. Throughout the past decade, the focus has been almost exclusively on learning algorithms that improve the representation capability of generative models. However, effective data representation requires improvement in both the learning algorithm and the architecture of the generative models...
August 9, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30138751/design-of-deep-echo-state-networks
#18
Claudio Gallicchio, Alessio Micheli, Luca Pedrelli
In this paper, we provide a novel approach to the architectural design of deep Recurrent Neural Networks using signal frequency analysis. In particular, focusing on the Reservoir Computing framework and inspired by the principles related to the inherent effect of layering, we address a fundamental open issue in deep learning, namely the question of how to establish the number of layers in recurrent architectures in the form of deep echo state networks (DeepESNs). The proposed method is first analyzed and refined on a controlled scenario and then it is experimentally assessed on challenging real-world tasks...
August 8, 2018: Neural Networks: the Official Journal of the International Neural Network Society
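Entry #18 is about choosing the number of layers in a deep echo state network; that frequency-analysis criterion is not reproduced here. The sketch below shows only a minimal DeepESN forward pass: a stack of untrained reservoirs, each driven by the states of the layer below, with recurrent weights rescaled to a target spectral radius. The layer count, sizes, leak rate and input signal are arbitrary.

```python
# Minimal deep echo state network (DeepESN) forward pass: stacked
# untrained reservoirs, each fed by the previous layer's state.  The
# layer-number selection method proposed in the paper is not shown.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_units, n_in, T = 3, 100, 1, 200
leak, rho_target, in_scale = 0.5, 0.9, 0.5

def reservoir(n_out, n_src):
    W_in = rng.uniform(-in_scale, in_scale, (n_out, n_src))
    W = rng.uniform(-1, 1, (n_out, n_out))
    W *= rho_target / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    return W_in, W

layers = [reservoir(n_units, n_in)] + \
         [reservoir(n_units, n_units) for _ in range(n_layers - 1)]

u = np.sin(0.2 * np.arange(T))[:, None]          # toy input signal
states = [np.zeros(n_units) for _ in range(n_layers)]
collected = np.zeros((T, n_layers * n_units))

for t in range(T):
    inp = u[t]
    for i, (W_in, W) in enumerate(layers):
        # leaky-integrator update; layer 0 sees the external input,
        # deeper layers see the state of the layer below
        states[i] = (1 - leak) * states[i] + \
                    leak * np.tanh(W_in @ inp + W @ states[i])
        inp = states[i]
    collected[t] = np.concatenate(states)

# a linear readout (e.g. ridge regression) would normally be trained on `collected`
print(collected.shape)
```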
https://www.readbyqxmd.com/read/30142505/born-to-learn-the-inspiration-progress-and-future-of-evolved-plastic-artificial-neural-networks
#19
REVIEW
Andrea Soltoggio, Kenneth O Stanley, Sebastian Risi
Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifelong learning. The interplay of these elements leads to the emergence of biological intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) employ simulated evolution in silico to breed plastic neural networks, with the aim of autonomously designing and creating learning systems. EPANN experiments evolve networks that include both innate properties and the ability to change and learn in response to experiences in different environments and problem domains...
August 7, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30130679/fuzzy-c-means-based-architecture-reduction-of-a-probabilistic-neural-network
#20
Maciej Kusy
The efficiency of the probabilistic neural network (PNN) is very sensitive to the cardinality of the input data set. This sensitivity results from the design of the network's pattern layer, in which the neurons perform an activation on every input record. This makes the PNN architecture complex, especially for big-data classification tasks. In this paper, a new algorithm for the structure reduction of the PNN is put forward. The solution relies on performing fuzzy c-means clustering of the data and selecting the PNN's pattern neurons on the basis of the obtained centroids...
August 4, 2018: Neural Networks: the Official Journal of the International Neural Network Society
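The idea in entry #20 is to build the PNN pattern layer from cluster centroids rather than from every training record. The sketch below illustrates that idea with a PNN (Parzen-window classifier) whose pattern neurons are per-class centroids; for brevity it uses a plain k-means step as a stand-in for the paper's fuzzy c-means clustering, and the data, cluster count and kernel width are invented.

```python
# A probabilistic neural network whose pattern layer uses per-class
# cluster centroids instead of all training records.  Plain k-means is
# used here as a stand-in for the fuzzy c-means step of the paper.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers

def pnn_predict(x, patterns_by_class, sigma=0.5):
    """Class with the highest average Gaussian-kernel activation."""
    scores = []
    for patterns in patterns_by_class:
        d2 = ((patterns - x) ** 2).sum(1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return int(np.argmax(scores))

# two classes of 200 points each, reduced to 5 pattern neurons per class
X0 = rng.normal([0, 0], 0.5, (200, 2))
X1 = rng.normal([2, 2], 0.5, (200, 2))
patterns = [kmeans(X0, 5), kmeans(X1, 5)]

print(pnn_predict(np.array([0.1, -0.2]), patterns))  # 0
print(pnn_predict(np.array([1.9, 2.2]), patterns))   # 1
```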

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Combine operators

(heart or cardiac or cardio*) AND arrest -"American Heart Association"
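The tips above describe the site's query syntax. As a rough, hypothetical illustration only (this is not Read by QxMD's actual search engine), the snippet below mimics two of the operators, word-stem matching with * and exclusion with -, using regular expressions over a few made-up titles.

```python
# Rough illustration of two operators from the tips above, applied to
# invented titles with regular expressions (not the site's real engine).
# Query: Neuro* -triad
import re

titles = [
    "Neurology of the Virchow triad",
    "Neuroscientist profiles in deep learning",
    "Cardiac arrest outcomes",
]
for title in titles:
    has_stem = re.search(r"\bNeuro\w*", title, re.IGNORECASE) is not None
    excluded = re.search(r"\btriad\b", title, re.IGNORECASE) is not None
    print(title, "->", has_stem and not excluded)
# Only "Neuroscientist profiles in deep learning" satisfies both conditions.
```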