Neural Networks: the Official Journal of the International Neural Network Society

https://www.readbyqxmd.com/read/28525811/an-online-supervised-learning-method-based-on-gradient-descent-for-spiking-neurons
#1
Yan Xu, Jing Yang, Shuiming Zhong
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and well verified in current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses...
April 27, 2017: Neural Networks: the Official Journal of the International Neural Network Society
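The online flavour of such learning can be illustrated with a much-simplified sketch: a single neuron thresholds the leaky traces of its input spike trains, and at every time step the weights are nudged by the difference between the desired and actual output spike (a Widrow-Hoff-style rule; the model, kernel, and all parameters below are illustrative assumptions, not the authors' exact method).

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 200, 30                 # time steps, input neurons
tau, eta, theta = 5.0, 0.05, 1.0  # trace decay, learning rate, threshold

inputs = (rng.random((n_in, T)) < 0.05).astype(float)  # random input spikes
desired = np.zeros(T)
desired[[50, 120, 180]] = 1.0                          # target output spike times
w = rng.normal(0.0, 0.1, n_in)

def run(w):
    """Output spike train: threshold on the weighted sum of input traces."""
    trace, out = np.zeros(n_in), np.zeros(T)
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau) + inputs[:, t]
        out[t] = 1.0 if w @ trace >= theta else 0.0
    return out

err_before = np.abs(run(w) - desired).sum()

for epoch in range(200):          # online pass: update at every time step
    trace = np.zeros(n_in)
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau) + inputs[:, t]
        actual = 1.0 if w @ trace >= theta else 0.0
        w += eta * (desired[t] - actual) * trace   # raise/lower the potential

err_after = np.abs(run(w) - desired).sum()
```

On this toy problem the per-time-step updates reduce the spike-train error without ever forming a batch, which is the essence of the online setting.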
https://www.readbyqxmd.com/read/28500895/a-universal-multilingual-weightless-neural-network-tagger-via-quantitative-linguistics
#2
Hugo C C Carneiro, Carlos E Pedreira, Felipe M G França, Priscila M V Lima
In the last decade, given the availability of corpora in several distinct languages, research on multilingual part-of-speech tagging started to grow. Amongst the novelties there is mWANN-Tagger (multilingual weightless artificial neural network tagger), a weightless neural part-of-speech tagger capable of being used for mostly-suffix-oriented languages. The tagger was subjected to corpora in eight languages of quite distinct natures and had a remarkable accuracy with very low sample deviation in every one of them, indicating the robustness of weightless neural systems for part-of-speech tagging tasks...
April 26, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28494328/robust-stability-analysis-of-quaternion-valued-neural-networks-with-time-delays-and-parameter-uncertainties
#3
Xiaofeng Chen, Zhongshan Li, Qiankun Song, Jin Hu, Yuanshun Tan
This paper addresses the problem of robust stability for quaternion-valued neural networks (QVNNs) with leakage delay, discrete delay and parameter uncertainties. Based on the homeomorphic mapping theorem and the Lyapunov theorem, via the modulus inequality technique for quaternions, some sufficient conditions on the existence, uniqueness, and global robust stability of the equilibrium point are derived for the delayed QVNNs with parameter uncertainties. Furthermore, as direct applications of these results, several sufficient conditions are obtained for checking the global robust stability of QVNNs without leakage delay as well as complex-valued neural networks (CVNNs) with both leakage and discrete delays...
April 26, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28494329/hopfield-networks-as-a-model-of-prototype-based-category-learning-a-method-to-distinguish-trained-spurious-and-prototypical-attractors
#4
Chris Gorman, Anthony Robins, Alistair Knott
We present an investigation of the potential use of Hopfield networks to learn neurally plausible, distributed representations of category prototypes. Hopfield networks are dynamical models of autoassociative memory which learn to recreate a set of input states from any given starting state. These networks, however, will almost always learn states which were not presented during training, so-called spurious states. Historically, spurious states have been an undesirable side-effect of training a Hopfield network, and there has been much research into detecting and discarding these unwanted states...
April 25, 2017: Neural Networks: the Official Journal of the International Neural Network Society
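The storage-and-recall dynamics the abstract builds on fit in a few lines (a generic Hebbian Hopfield sketch, not the paper's specific prototype-learning setup): ±1 patterns are stored by the outer-product rule, and recall from a corrupted start falls back into a trained attractor.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])  # two orthogonal patterns
n = patterns.shape[1]

# Hebbian outer-product rule; zero the diagonal to avoid self-coupling.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    state = state.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)   # synchronous update
    return state

# Flip one bit of the first pattern; the dynamics restore the attractor.
noisy = patterns[0].copy()
noisy[2] *= -1
restored = recall(noisy)
```

Running the same dynamics from a random starting state may instead converge to a state that was never stored, which is exactly the spurious-attractor behaviour the paper sets out to distinguish from trained and prototypical attractors.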
https://www.readbyqxmd.com/read/28499191/spatiotemporal-signal-classification-via-principal-components-of-reservoir-states
#5
Ashley Prater
Reservoir computing is a recently introduced machine learning paradigm that has been shown to be well-suited for the processing of spatiotemporal data. Rather than training the node connections and weights via backpropagation, as in traditional recurrent neural networks, reservoirs have fixed connections and weights among the 'hidden layer' nodes, and traditionally only the weights to the output layer of neurons are trained using linear regression. We claim that for signal classification tasks one may forgo the weight training step entirely and instead use a simple supervised clustering method based upon principal components of reservoir states...
April 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
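The weight-free pipeline can be sketched as follows: drive a fixed random reservoir, keep the principal components of the collected states, and label a test signal by its nearest class centroid in PCA space. Everything below (reservoir size, signals, component count) is an illustrative toy, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.normal(0, 1, N)

def final_state(u):
    """Drive the fixed reservoir with a scalar sequence; return the last state."""
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
    return x

t = np.linspace(0, 2 * np.pi, 100)
def signal(freq):
    return np.sin(freq * t) + rng.normal(0, 0.05, t.size)  # noisy sinusoid

states = np.array([final_state(signal(f)) for f in [1] * 10 + [3] * 10])
labels = np.array([0] * 10 + [1] * 10)

# PCA of the reservoir states via SVD; keep the top 5 components.
mean = states.mean(axis=0)
_, _, Vt = np.linalg.svd(states - mean, full_matrices=False)
proj = (states - mean) @ Vt[:5].T
c0 = proj[labels == 0].mean(axis=0)
c1 = proj[labels == 1].mean(axis=0)

def classify(u):
    z = (final_state(u) - mean) @ Vt[:5].T
    return 0 if np.linalg.norm(z - c0) < np.linalg.norm(z - c1) else 1
```

Note that no output weights are ever trained: classification rests entirely on how the fixed reservoir separates the two signal classes in state space.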
https://www.readbyqxmd.com/read/28482227/probabilistic-lower-bounds-for-approximation-by-shallow-perceptron-networks
#6
Věra Kůrková, Marcello Sanguineti
Limitations of approximation capabilities of shallow perceptron networks are investigated. Lower bounds on approximation errors are derived for binary-valued functions on finite domains. It is proven that unless the number of network units is sufficiently large (larger than any polynomial of the logarithm of the size of the domain) a good approximation cannot be achieved for almost any uniformly randomly chosen function on a given domain. The results are obtained by combining probabilistic Chernoff-Hoeffding bounds with estimates of the sizes of sets of functions exactly computable by shallow networks with increasing numbers of units...
April 19, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28478372/a-framework-for-parallel-and-distributed-training-of-neural-networks
#7
Simone Scardapane, Paolo Di Lorenzo
The aim of this paper is to develop a general framework for training neural networks (NNs) in a distributed environment, where training data is partitioned over a set of agents that communicate with each other through a sparse, possibly time-varying, connectivity pattern. In such a distributed scenario, the training problem can be formulated as the (regularized) optimization of a non-convex social cost function, given by the sum of local (non-convex) costs, where each agent contributes with a single error term defined with respect to its local dataset...
April 19, 2017: Neural Networks: the Official Journal of the International Neural Network Society
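The "sum of local costs" formulation can be made concrete with a minimal decentralized-gradient sketch: each agent holds a local loss (quadratic here, for simplicity, rather than a non-convex NN loss) and alternates a gossip (averaging) step over a ring with a local gradient step. Topology, step size and losses are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, d = 5, 3
A = [rng.normal(0, 1, (20, d)) for _ in range(n_agents)]  # local data
b = [rng.normal(0, 1, 20) for _ in range(n_agents)]

# Doubly-stochastic mixing matrix for a ring of 5 agents.
M = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    M[i, i] = 0.5
    M[i, (i - 1) % n_agents] = 0.25
    M[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, d))
eta = 0.002
for k in range(3000):
    grads = np.array([2 * A[i].T @ (A[i] @ x[i] - b[i])
                      for i in range(n_agents)])
    x = M @ x - eta * grads        # consensus step + local gradient step

# Centralized solution of the summed cost, for comparison.
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
gap = max(np.linalg.norm(x[i] - x_star) for i in range(n_agents))
```

With a constant step size the agents agree up to a small neighbourhood of the centralized minimizer; no agent ever sees another agent's raw data, only its neighbours' current iterates.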
https://www.readbyqxmd.com/read/28478371/recursive-least-mean-p-power-extreme-learning-machine
#8
Jing Yang, Feng Ye, Hai-Jun Rong, Badong Chen
Real industrial processes have measurement samples corrupted by noises of different statistical characteristics, and usually obtain these samples one by one, so online sequential learning algorithms that can achieve good learning performance for systems with noises of various statistics are necessary. This paper proposes a new online Extreme Learning Machine (ELM, of Huang et al.) algorithm, namely the recursive least mean p-power ELM (RLMP-ELM). In RLMP-ELM, a novel error criterion for the cost function, namely the least mean p-power (LMP) error criterion, provides a mechanism to update the output weights sequentially...
April 12, 2017: Neural Networks: the Official Journal of the International Neural Network Society
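As background, the batch ELM that RLMP-ELM builds on is very short: a random, untrained hidden layer produces features, and only the output weights are solved, by least squares. The recursive p-power variant in the paper replaces this batch least-squares step with a sequential update; the sketch below shows only the common starting point, with illustrative sizes and data.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                 # toy target function

L = 50                                  # hidden neurons
W_h = rng.normal(0, 1, (1, L))          # random, never-trained input weights
b_h = rng.normal(0, 1, L)
H = np.tanh(X @ W_h + b_h)              # random hidden-layer features

beta = np.linalg.lstsq(H, y, rcond=None)[0]   # only output weights are fit

mse = np.mean((H @ beta - y) ** 2)
```

The least-squares step implicitly minimizes the mean 2-power of the error; the LMP criterion generalizes the exponent, which helps when the noise is not Gaussian.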
https://www.readbyqxmd.com/read/28458015/lagrange-%C3%AE-exponential-stability-and-%C3%AE-exponential-convergence-for-fractional-order-complex-valued-neural-networks
#9
Jigui Jian, Peng Wan
This paper deals with the problem on Lagrange α-exponential stability and α-exponential convergence for a class of fractional-order complex-valued neural networks. To this end, some new fractional-order differential inequalities are established, which improve and generalize previously known criteria. By using the new inequalities and coupling with the Lyapunov method, some effective criteria are derived to guarantee Lagrange α-exponential stability and α-exponential convergence of the addressed network...
April 12, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28460305/event-triggered-h%C3%A2-filtering-for-delayed-neural-networks-via-sampled-data
#10
Emel Arslan, R Vadivel, M Syed Ali, Sabri Arik
This paper is concerned with event-triggered H∞ filtering for delayed neural networks via sampled data. A novel event-triggered scheme is proposed, which can lead to a significant reduction of the information communication burden in the network; the feature of this scheme is that whether or not the sampled data should be transmitted is determined by the current sampled data and the error between the current sampled data and the latest transmitted data. By constructing a proper Lyapunov-Krasovskii functional and utilizing the reciprocally convex combination technique and Jensen's inequality, sufficient conditions are derived to ensure that the resultant filtering error system is asymptotically stable...
April 11, 2017: Neural Networks: the Official Journal of the International Neural Network Society
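The triggering rule described above can be sketched directly: a new sample is transmitted only when its deviation from the last transmitted sample is large relative to the current sample itself. The signal and the threshold parameter below are illustrative assumptions.

```python
import numpy as np

sigma = 0.1                       # triggering threshold (assumed value)
t = np.linspace(0, 10, 501)
x = np.sin(t)                     # sampled signal

last = x[0]                       # latest transmitted sample
sent = [0]                        # indices of transmitted samples
for k in range(1, t.size):
    # Transmit when the error energy exceeds sigma times the sample energy.
    if (x[k] - last) ** 2 > sigma * x[k] ** 2:
        last = x[k]
        sent.append(k)
```

Only a fraction of the 501 samples cross the threshold, which is exactly the communication saving the scheme is designed for.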
https://www.readbyqxmd.com/read/28385624/weighted-spatial-based-geometric-scheme-as-an-efficient-algorithm-for-analyzing-single-trial-eegs-to-improve-cue-based-bci-classification
#11
Fatemeh Alimardani, Reza Boostani, Benjamin Blankertz
There is a growing interest in analyzing the geometrical behavior of electroencephalogram (EEG) covariance matrix in the context of brain computer interface (BCI). The bottleneck of the current Riemannian framework is the bias of the mean vector of EEG signals to the noisy trials, which deteriorates the covariance matrix in the manifold space. This study presents a spatial weighting scheme to reduce the effect of noisy trials on the mean vector. To assess the proposed method, dataset IIa from BCI competition IV, containing the EEG trials of 9 subjects performing four mental tasks, was utilized...
March 22, 2017: Neural Networks: the Official Journal of the International Neural Network Society
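The core idea of down-weighting noisy trials when averaging EEG covariance matrices can be shown with a toy example. For simplicity this uses a Euclidean weighted mean with inverse-distance weights; the paper's scheme operates in the Riemannian framework and differs in detail, so treat this purely as an illustration of the weighting effect.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_ch, n_samp = 20, 4, 100
trials = rng.normal(0, 1, (n_trials, n_ch, n_samp))
trials[0] *= 10                          # one artefact-contaminated trial

covs = np.array([X @ X.T / n_samp for X in trials])  # per-trial covariance

plain = covs.mean(axis=0)                # unweighted mean: biased by trial 0
ref = np.median(covs, axis=0)            # robust reference matrix
d = np.array([np.linalg.norm(C - ref) for C in covs])
w = 1.0 / (d + 1e-9)                     # down-weight far-away (noisy) trials
weighted = np.tensordot(w / w.sum(), covs, axes=1)
```

The clean trials have covariance close to the identity, so the weighted mean stays near it while the plain mean is dragged away by the single noisy trial.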
https://www.readbyqxmd.com/read/28396068/evaluating-deep-learning-architectures-for-speech-emotion-recognition
#12
Haytham M Fayek, Margaret Lech, Lawrence Cavedon
Speech Emotion Recognition (SER) can be regarded as a static or dynamic classification problem, which makes SER an excellent test bed for investigating and comparing various deep learning architectures. We describe a frame-based formulation to SER that relies on minimal speech processing and end-to-end deep learning to model intra-utterance dynamics. We use the proposed SER system to empirically explore feed-forward and recurrent neural network architectures and their variants. Experiments conducted illuminate the advantages and limitations of these architectures in paralinguistic speech recognition and emotion recognition in particular...
March 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28385623/how-can-a-recurrent-neurodynamic-predictive-coding-model-cope-with-fluctuation-in-temporal-patterns-robotic-experiments-on-imitative-interaction
#13
Ahmadreza Ahmadi, Jun Tani
The current paper examines how a recurrent neural network (RNN) model using a dynamic predictive coding scheme can cope with fluctuations in temporal patterns through generalization in learning. The conjecture driving the present inquiry is that an RNN model with multiple timescales (MTRNN) learns by extracting patterns of change from observed temporal patterns, developing an internal dynamic structure such that variance in initial internal states accounts for modulations in corresponding observed patterns. We trained an MTRNN with low-dimensional temporal patterns, and assessed performance on an imitation task employing these patterns...
March 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28458082/a-bag-of-paths-framework-for-network-data-analysis
#14
Kevin Françoisse, Ilkka Kivimäki, Amin Mantrach, Fabrice Rossi, Marco Saerens
This work develops a generic framework, called the bag-of-paths (BoP), for link and network data analysis. The central idea is to assign a probability distribution on the set of all paths in a network. More precisely, a Gibbs-Boltzmann distribution is defined over a bag of paths in a network, that is, on a representation that considers all paths independently. We show that, under this distribution, the probability of drawing a path connecting two nodes can easily be computed in closed form by simple matrix inversion...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
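The closed-form computation the abstract mentions can be sketched on a toy graph: with W combining the reference random-walk probabilities and Boltzmann-weighted edge costs, a single matrix inversion Z = (I - W)⁻¹ sums the weights of all paths between every node pair. Graph, costs, and the inverse temperature θ below are toy assumptions.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],            # small undirected graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
theta = 1.0
cost = 1.0                             # unit cost on every edge (assumed)

P_ref = A / A.sum(axis=1, keepdims=True)   # natural random-walk probabilities
W = P_ref * np.exp(-theta * cost)          # Boltzmann-weighted transition matrix

Z = np.linalg.inv(np.eye(len(A)) - W)      # one inversion sums all paths i -> j
```

Entries of Z are larger for node pairs joined by many short, low-cost paths: here node 1 is a direct neighbour of node 0 while node 3 is two hops away, so Z[0, 1] exceeds Z[0, 3], the intuition behind the BoP betweenness and distance measures.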
https://www.readbyqxmd.com/read/28410513/representation-learning-via-dual-autoencoder-for-recommendation
#15
Fuzhen Zhuang, Zhiqiang Zhang, Mingda Qian, Chuan Shi, Xing Xie, Qing He
Recommendation has attracted a vast amount of attention and research in recent decades. Most previous works employ matrix factorization techniques to learn the latent factors of users and items, and many subsequent works consider external information, e.g., users' social relationships and items' attributes, to improve recommendation performance under the matrix factorization framework. However, matrix factorization methods may not make full use of the limited information from rating or check-in matrices, and may achieve unsatisfactory results...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
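The matrix-factorization baseline the abstract contrasts itself with fits in a short SGD sketch: learn user and item latent factors whose inner products reproduce the observed ratings. Dimensions, learning rates, and the fully observed toy rating matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_users, n_items, k = 8, 10, 3
U_true = rng.normal(0, 1, (n_users, k))
V_true = rng.normal(0, 1, (n_items, k))
R = U_true @ V_true.T                    # fully observed toy rating matrix

U = rng.normal(0, 0.1, (n_users, k))     # user latent factors
V = rng.normal(0, 0.1, (n_items, k))     # item latent factors
eta, lam = 0.02, 0.001                   # step size, L2 regularization
for epoch in range(500):
    for u in range(n_users):
        for i in range(n_items):
            e = R[u, i] - U[u] @ V[i]            # rating residual
            U[u] += eta * (e * V[i] - lam * U[u])
            V[i] += eta * (e * U[u] - lam * V[i])

rmse = np.sqrt(np.mean((R - U @ V.T) ** 2))
```

In practice the rating matrix is mostly unobserved and the loop runs only over known entries, which is the information bottleneck that motivates the paper's dual-autoencoder alternative.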
https://www.readbyqxmd.com/read/28390225/persistent-irregular-activity-is-a-result-of-rebound-and-coincident-detection-mechanisms-a-computational-study
#16
Mustafa Zeki, Ahmed A Moustafa
Persistent irregular activity is defined as elevated irregular neural discharges in the brain such that, while the average network activity displays high-frequency oscillations, the participating neurons display irregular, low-frequency oscillations. This type of activity is observed in many brain regions, such as the prefrontal cortex, which plays a role in working memory. Previous studies have shown that large networks with sparse connections, networks with strong noise and persistent inhibition, and networks with structured synaptic connections display persistent irregular activity...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28388473/collective-mutual-information-maximization-to-unify-passive-and-positive-approaches-for-improving-interpretation-and-generalization
#17
Ryotaro Kamimura
The present paper aims to propose a simple method to realize mutual information maximization for better interpretation and generalization. To train neural networks and obtain better performance, neurons should impartially consider as many input patterns as possible. Simultaneously, and especially for ease of interpretation, they should represent characteristics specific to certain input patterns as faithfully as possible. This contradiction can be solved by introducing mutual information between neurons and input patterns...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28388472/robust-fixed-time-synchronization-for-uncertain-complex-valued-neural-networks-with-discontinuous-activation-functions
#18
Xiaoshuai Ding, Jinde Cao, Ahmed Alsaedi, Fuad E Alsaadi, Tasawar Hayat
This paper is concerned with the fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization requires not only that the considered master-slave system realizes synchronization within a finite time segment, but also that there is a uniform upper bound on such time intervals for all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28388471/extending-the-stabilized-supralinear-network-model-for-binocular-image-processing
#19
Ben Selby, Bryan Tripp
The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28365399/synchronised-firing-patterns-in-a-random-network-of-adaptive-exponential-integrate-and-fire-neuron-model
#20
F S Borges, P R Protachevicz, E L Lameu, R C Bonetti, K C Iarosz, I L Caldas, M S Baptista, A M Batista
We have studied neuronal synchronisation in a random network of adaptive exponential integrate-and-fire neurons. We study how spiking or bursting synchronous behaviour appears as a function of the coupling strength and the probability of connections, by constructing parameter spaces that identify these synchronous behaviours from measurements of the inter-spike interval and the calculation of the order parameter. Moreover, we verify the robustness of synchronisation by applying an external perturbation to each neuron...
June 2017: Neural Networks: the Official Journal of the International Neural Network Society
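The order parameter used to quantify synchronisation is the magnitude of the mean phase vector over the population: 1 for fully synchronised phases, near 0 for incoherent ones. The two toy phase populations below are assumptions for illustration; in the paper the phases are derived from the neurons' spike times.

```python
import numpy as np

rng = np.random.default_rng(6)

def order_parameter(phases):
    """Kuramoto order parameter r = |mean of unit phase vectors|."""
    return np.abs(np.mean(np.exp(1j * phases)))

sync_phases = rng.normal(0.0, 0.1, 100)          # tightly clustered phases
incoh_phases = rng.uniform(0, 2 * np.pi, 100)    # incoherent phases

r_sync = order_parameter(sync_phases)
r_incoh = order_parameter(incoh_phases)
```

Sweeping coupling strength and connection probability while recording r (together with inter-spike intervals) is what produces the parameter-space maps of spiking versus bursting synchronisation described above.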