Neural Networks: the Official Journal of the International Neural Network Society

https://www.readbyqxmd.com/read/30419480/a-neurodynamic-approach-to-nonlinear-optimization-problems-with-affine-equality-and-convex-inequality-constraints
#1
Na Liu, Sitian Qin
This paper presents a neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints. The proposed neural network is endowed with a time-varying auxiliary function, which guarantees that the state of the neural network enters the feasible region in finite time and remains there thereafter. Moreover, from any initial point, the state is shown to converge to the critical point set when the objective function is generally nonconvex. In particular, when the objective function is pseudoconvex (or convex), the state is proved to be globally convergent to an optimal solution of the considered optimization problem...
October 28, 2018: Neural Networks: the Official Journal of the International Neural Network Society
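The paper's time-varying auxiliary function is not reproduced here, but the general flavor of a neurodynamic solver is an ODE whose trajectory descends the objective while being pushed toward the feasible set. The following numpy sketch integrates a simple penalty-based gradient flow for an assumed toy problem with one affine equality and one convex inequality constraint; the objective, constraints, penalty weight, and Euler step size are all illustrative assumptions rather than the method of the paper.

    import numpy as np

    # Assumed toy problem: minimize ||x||^2  subject to  a^T x = b  and  x[0] - 1 <= 0
    a, b = np.array([1.0, 2.0]), 1.0

    def flow(x, rho=10.0):
        # Penalty-based gradient flow: dx/dt = -(grad f + rho * constraint-violation terms).
        grad_f = 2.0 * x
        grad_eq = a * (a @ x - b)                           # gradient of (a^T x - b)^2 / 2
        grad_ineq = np.array([max(x[0] - 1.0, 0.0), 0.0])   # gradient of max(x[0]-1, 0)^2 / 2
        return -(grad_f + rho * (grad_eq + grad_ineq))

    x = np.array([3.0, -2.0])            # arbitrary initial state
    dt = 1e-3
    for _ in range(20000):               # forward-Euler integration of the flow
        x = x + dt * flow(x)
    print(x, a @ x - b)                  # state settles near the constrained minimizer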
https://www.readbyqxmd.com/read/30408696/fixed-time-synchronization-of-inertial-memristor-based-neural-networks-with-discrete-delay
#2
Chuan Chen, Lixiang Li, Haipeng Peng, Yixian Yang
This paper is concerned with the fixed-time synchronization control of inertial memristor-based neural networks with discrete delay. We design four different kinds of feedback controllers, under which the considered inertial memristor-based neural networks can realize fixed-time synchronization perfectly. Moreover, the obtained fixed-time synchronization criteria can be verified by algebraic operations. For any initial synchronization error, the settling time of fixed-time synchronization is bounded by a fixed constant, which can be calculated beforehand based on system parameters and controller parameters...
October 25, 2018: Neural Networks: the Official Journal of the International Neural Network Society
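The bound on the settling time mentioned above is the defining feature of fixed-time (as opposed to finite-time) synchronization. A standard fixed-time stability lemma, not necessarily the exact bound derived in this paper, states that if a Lyapunov function of the synchronization error satisfies dV/dt <= -a*V^p - b*V^q with a, b > 0, 0 < p < 1 and q > 1, then the settling time is at most 1/(a(1-p)) + 1/(b(q-1)), regardless of the initial error. A minimal sketch of that bound, with assumed parameter values:

    def fixed_time_bound(a, b, p, q):
        # Settling-time upper bound for dV/dt <= -a*V**p - b*V**q
        # (standard fixed-time stability lemma; independent of V(0)).
        assert a > 0 and b > 0 and 0 < p < 1 and q > 1
        return 1.0 / (a * (1.0 - p)) + 1.0 / (b * (q - 1.0))

    # Assumed system/controller parameters, for illustration only.
    print(fixed_time_bound(a=2.0, b=1.5, p=0.5, q=2.0))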
https://www.readbyqxmd.com/read/30408697/evaluating-performance-of-neural-codes-in-model-neural-communication-networks
#3
Chris G Antonopoulos, Ezequiel Bianco-Martinez, Murilo S Baptista
Information needs to be appropriately encoded to be reliably transmitted over physical media. Similarly, neurons have their own codes to convey information in the brain. Even though it is well-known that neurons exchange information using a pool of several protocols of spatio-temporal encodings, the suitability of each code and their performance as a function of network parameters and external stimuli is still one of the great mysteries in neuroscience. This paper sheds light on this by modeling small-size networks of chemically and electrically coupled Hindmarsh-Rose spiking neurons...
October 23, 2018: Neural Networks: the Official Journal of the International Neural Network Society
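For readers unfamiliar with the unit model used above, the Hindmarsh-Rose neuron is a three-variable system (fast membrane potential, fast recovery variable, slow adaptation current) that produces spiking and bursting. The sketch below integrates a single neuron with commonly used parameter values; the chemical/electrical coupling and the network configurations studied in the paper are not included.

    import numpy as np

    def hindmarsh_rose(state, I=3.2, a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_r=-1.6):
        # Standard three-variable Hindmarsh-Rose model; I is the injected current.
        x, y, z = state
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)
        return np.array([dx, dy, dz])

    state = np.array([-1.6, 0.0, 0.0])
    dt, trace = 0.01, []
    for _ in range(100000):                  # forward-Euler integration
        state = state + dt * hindmarsh_rose(state)
        trace.append(state[0])               # membrane potential exhibits bursting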
https://www.readbyqxmd.com/read/30408694/unsupervised-feature-extraction-by-low-rank-and-sparsity-preserving-embedding
#4
Shanhua Zhan, Jigang Wu, Na Han, Jie Wen, Xiaozhao Fang
Manifold based feature extraction has been proved to be an effective technique in dealing with the unsupervised classification tasks. However, most of the existing works cannot guarantee the global optimum of the learned projection, and they are sensitive to different noises. In addition, many methods cannot catch the discriminative information as much as possible since they only exploit the local structure of data while ignoring the global structure. To address the above problems, this paper proposes a novel graph based feature extraction method named low-rank and sparsity preserving embedding (LRSPE) for unsupervised learning...
October 23, 2018: Neural Networks: the Official Journal of the International Neural Network Society
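The "sparsity preserving" ingredient in this family of methods typically encodes each sample as a sparse linear combination of the other samples, and those coefficients are then preserved by the learned projection. The sketch below shows only that sparse-reconstruction step with an off-the-shelf lasso solver; the low-rank term and the actual LRSPE objective are not reproduced, and the toy data and regularization weight are assumptions.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))                # 50 samples, 10 features (toy data)

    def sparse_weights(X, lam=0.05):
        # For each sample x_i, solve min_s ||x_i - X_{-i}^T s||^2 + lam * ||s||_1,
        # i.e. reconstruct x_i sparsely from the remaining samples.
        n = X.shape[0]
        S = np.zeros((n, n))
        for i in range(n):
            others = np.delete(X, i, axis=0)
            model = Lasso(alpha=lam, fit_intercept=False).fit(others.T, X[i])
            S[i, np.arange(n) != i] = model.coef_
        return S

    S = sparse_weights(X)                        # row i: sparse coefficients for sample i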
https://www.readbyqxmd.com/read/30408695/intrinsic-motivation-and-mental-replay-enable-efficient-online-adaptation-in-stochastic-recurrent-networks
#5
Daniel Tanneberg, Jan Peters, Elmar Rueckert
Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Therefore, continuous online adaptation for lifelong learning and sample-efficient mechanisms for adapting to changes in the environment, the constraints, the tasks, or the robot itself are crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation based on a bio-inspired stochastic recurrent neural network. By using learning signals that mimic the intrinsic-motivation signal of cognitive dissonance, together with a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments within seconds...
October 22, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30408692/implicit-incremental-natural-actor-critic-algorithm
#6
Ryo Iwaki, Minoru Asada
Natural policy gradient (NPG) methods are promising approaches to finding locally optimal policy parameters. The NPG approach works well in optimizing complex policies with high-dimensional parameters, and the effectiveness of NPG methods has been demonstrated in many fields. However, the incremental estimation of the NPG is computationally unstable owing to its high sensitivity to the step-size values, especially the one used to update the NPG estimate. In this study, we propose a new incremental and stable algorithm for NPG estimation...
October 21, 2018: Neural Networks: the Official Journal of the International Neural Network Society
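As background, and not the implicit incremental algorithm proposed in the paper, a plain natural policy gradient step preconditions the vanilla policy gradient by the inverse Fisher information matrix of the policy. The toy numpy sketch below does this in batch form for a softmax policy over a few actions; the sampled actions, returns, step size and damping term are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    theta = rng.normal(size=4)                    # logits of a softmax policy over 4 actions

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def natural_gradient_step(theta, actions, returns, step=0.1, damping=1e-3):
        # Vanilla gradient g = E[grad log pi(a) * return];
        # Fisher F = E[grad log pi(a) grad log pi(a)^T]; NPG direction = F^{-1} g.
        pi = softmax(theta)
        scores = np.eye(len(theta))[actions] - pi            # grad log pi for each action
        g = (scores * np.asarray(returns)[:, None]).mean(axis=0)
        F = (scores[:, :, None] * scores[:, None, :]).mean(axis=0)
        direction = np.linalg.solve(F + damping * np.eye(len(theta)), g)
        return theta + step * direction

    actions = rng.integers(0, 4, size=64)         # assumed sampled actions and returns
    returns = rng.normal(size=64)
    theta = natural_gradient_step(theta, actions, returns)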
https://www.readbyqxmd.com/read/30390521/a-fully-convolutional-two-stream-fusion-network-for-interactive-image-segmentation
#7
Yang Hu, Andrea Soltoggio, Russell Lock, Steve Carter
In this paper, we propose a novel fully convolutional two-stream fusion network (FCTSFN) for interactive image segmentation. The proposed network includes two sub-networks: a two-stream late fusion network (TSLFN) that predicts the foreground at a reduced resolution, and a multi-scale refining network (MSRN) that refines the foreground at full resolution. The TSLFN includes two distinct deep streams followed by a fusion network. The intuition is that, since user interactions are more direct information on foreground/background than the image itself, the two-stream structure of the TSLFN reduces the number of layers between the pure user interaction features and the network output, allowing the user interactions to have a more direct impact on the segmentation result...
October 21, 2018: Neural Networks: the Official Journal of the International Neural Network Society
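The exact FCTSFN layers are specified in the paper; purely as a structural illustration of "two distinct streams followed by a fusion network", the PyTorch sketch below encodes the image in one stream and a user-interaction map (e.g. click distance transforms) in the other, then fuses them by concatenation into a per-pixel foreground score. All channel counts and layer choices are assumptions.

    import torch
    import torch.nn as nn

    class TwoStreamLateFusion(nn.Module):
        # One stream for the RGB image, one for a 2-channel user-interaction map;
        # late fusion by concatenation, then a 1x1 convolution to foreground logits.
        def __init__(self):
            super().__init__()
            self.image_stream = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.click_stream = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.fusion = nn.Conv2d(32, 1, 1)

        def forward(self, image, clicks):
            fused = torch.cat([self.image_stream(image), self.click_stream(clicks)], dim=1)
            return self.fusion(fused)             # per-pixel foreground logits

    net = TwoStreamLateFusion()
    logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64))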
https://www.readbyqxmd.com/read/30414556/roles-for-globus-pallidus-externa-revealed-in-a-computational-model-of-action-selection-in-the-basal-ganglia
#8
Shreyas M Suryanarayana, Jeanette Hellgren Kotaleski, Sten Grillner, Kevin N Gurney
The basal ganglia are considered vital to action selection - a hypothesis supported by several biologically plausible computational models. Of the several subnuclei of the basal ganglia, the globus pallidus externa (GPe) has been thought of largely as a relay nucleus, and its intrinsic connectivity has not been incorporated in significant detail, in any model thus far. Here, we incorporate newly revealed subgroups of neurons within the GPe into an existing computational model of the basal ganglia, and investigate their role in action selection...
October 19, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30408693/variational-inference-with-gaussian-mixture-model-and-householder-flow
#9
GuoJun Liu, Yang Liu, MaoZu Guo, Peng Li, MingYu Li
The variational auto-encoder (VAE) is a powerful and scalable deep generative model. Under the architecture of the VAE, the choice of the approximate posterior distribution is one of the crucial issues, and it has a significant impact on the tractability and flexibility of the VAE. Generally, latent variables are assumed to be normally distributed with a diagonal covariance matrix; however, this is not flexible enough to match the true, complex posterior distribution. We introduce a novel approach to design a flexible and arbitrarily complex approximate posterior distribution...
October 17, 2018: Neural Networks: the Official Journal of the International Neural Network Society
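A Householder flow, one common way to move beyond a diagonal-Gaussian posterior, applies a sequence of reflections z' = (I - 2 v v^T / ||v||^2) z to the sampled latent vector; each reflection is orthogonal, so no log-determinant correction is needed. The numpy sketch below shows only that transformation, not the paper's Gaussian-mixture posterior or the VAE training loop; the vectors and dimensions are assumptions.

    import numpy as np

    def householder_flow(z, vs):
        # Apply a sequence of Householder reflections H_v = I - 2 v v^T / ||v||^2.
        # Each H_v is orthogonal, so |det dH_v/dz| = 1 (no Jacobian term in the ELBO).
        for v in vs:
            z = z - 2.0 * (v @ z) / (v @ v) * v
        return z

    rng = np.random.default_rng(0)
    z0 = rng.normal(size=8)                        # sample from a diagonal-Gaussian posterior
    vs = [rng.normal(size=8) for _ in range(3)]    # reflection vectors (learned in a VAE)
    zk = householder_flow(z0, vs)
    print(np.linalg.norm(z0), np.linalg.norm(zk))  # reflections preserve the norm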
https://www.readbyqxmd.com/read/30388429/dual-vigilance-fuzzy-adaptive-resonance-theory
#10
Leonardo Enzo Brito da Silva, Islam Elnabarawy, Donald C Wunsch
Clusters retrieved by generic Adaptive Resonance Theory (ART) networks are limited to their internal categorical representation. This study extends the capabilities of ART by incorporating multiple vigilance thresholds in a single network: stricter (data compression) and looser (cluster similarity) vigilance values are used to obtain a many-to-one mapping of categories-to-clusters. It demonstrates this idea in the context of Fuzzy ART, presented as Dual Vigilance Fuzzy ART (DVFA), to improve the ability to capture clusters with arbitrary geometry...
October 13, 2018: Neural Networks: the Official Journal of the International Neural Network Society
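In Fuzzy ART, an input x (usually complement-coded) matches a category weight w through M = |x AND w| / |x|, with AND taken element-wise as the minimum. The dual-vigilance idea uses an upper threshold to decide whether the existing category resonates and a lower threshold to decide whether a newly created category still belongs to the same cluster. The sketch below illustrates only that two-threshold test; the thresholds and vectors are assumptions, and the full DVFA learning loop is omitted.

    import numpy as np

    def fuzzy_match(x, w):
        # Fuzzy ART match function: |x AND w| / |x|, AND = element-wise minimum.
        return np.minimum(x, w).sum() / x.sum()

    def dual_vigilance_test(x, w, rho_upper=0.85, rho_lower=0.60):
        # Upper vigilance passed: resonance with the existing category.
        # Only lower vigilance passed: new category, mapped to the same cluster.
        # Neither passed: new category in a new cluster.
        m = fuzzy_match(x, w)
        if m >= rho_upper:
            return "same category"
        if m >= rho_lower:
            return "new category, same cluster"
        return "new category, new cluster"

    x = np.array([0.2, 0.7, 0.8, 0.3])             # complement-coded input (assumed)
    w = np.array([0.1, 0.6, 0.9, 0.4])
    print(dual_vigilance_test(x, w))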
https://www.readbyqxmd.com/read/30388430/the-importance-of-recurrent-top-down-synaptic-connections-for-the-anticipation-of-dynamic-emotions
#11
Martial Mermillod, Yannick Bourrier, Erwan David, Louise Kauffmann, Alan Chauvin, Nathalie Guyader, Frédéric Dutheil, Carole Peyrin
Different studies have shown the efficiency of a feed-forward neural network in categorizing basic emotional facial expressions. However, recent findings in psychology and cognitive neuroscience suggest that visual recognition is not a pure bottom-up process but likely involves top-down recurrent connectivity. In the present computational study, we compared the performances of a pure bottom-up neural network (a standard multi-layer perceptron, MLP) with a neural network involving recurrent top-down connections (a simple recurrent network, SRN) in the anticipation of emotional expressions...
October 9, 2018: Neural Networks: the Official Journal of the International Neural Network Society
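The "simple recurrent network" contrasted with the MLP above is an Elman-style network whose hidden state feeds back into itself at the next time step, giving it temporal context that a pure feed-forward network lacks. A minimal numpy forward pass over a short sequence is sketched below; sizes and weights are random placeholders rather than the networks trained in the study.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 10, 20, 6
    W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
    W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))    # recurrent (context) weights
    W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))

    def srn_forward(xs):
        # Elman SRN: h_t = tanh(W_xh x_t + W_hh h_{t-1}); y_t = softmax(W_hy h_t).
        h, outputs = np.zeros(n_hid), []
        for x in xs:
            h = np.tanh(W_xh @ x + W_hh @ h)
            y = np.exp(W_hy @ h)
            outputs.append(y / y.sum())
        return outputs

    frames = [rng.normal(size=n_in) for _ in range(5)]   # a short input sequence
    probs = srn_forward(frames)                          # per-frame class probabilities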
https://www.readbyqxmd.com/read/30336326/estimation-theory-and-neural-networks-revisited-rekf-and-rsvsf-as-optimization-techniques-for-deep-learning
#12
Mahmoud Ismail, Mina Attari, Saeid Habibi, Samir Ziada
Deep-Learning has become a leading strategy for artificial intelligence and is being applied in many fields due to its excellent performance that has surpassed human cognitive abilities in a number of classification and control problems (Ciregan, Meier, & Schmidhuber, 2012; Mnih et al., 2015). However, the training process of Deep-Learning is usually slow and requires high-performance computing, capable of handling large datasets. The optimization of the training method can improve the learning rate of the Deep-Learning networks and result in a higher performance while using the same number of training epochs (cycles)...
October 3, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30388431/neighborhood-preserving-neural-network-for-fault-detection
#13
Haitao Zhao, Zhihui Lai
A novel statistical feature extraction method, called the neighborhood preserving neural network (NPNN), is proposed in this paper. NPNN can be viewed as a nonlinear data-driven fault detection technique that preserves the local geometrical structure of normal process data. The "local geometrical structure" means that each sample can be constructed as a linear combination of its neighbors. NPNN is characterized by adaptively training a nonlinear neural network which takes the local geometrical structure of the data into consideration...
October 1, 2018: Neural Networks: the Official Journal of the International Neural Network Society
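The statement that each sample can be constructed as a linear combination of its neighbors is the same local-reconstruction idea used in locally linear embedding: for every sample, a small regularized least-squares problem over its k nearest neighbors yields weights that sum to one. The numpy sketch below shows only that step; the neighborhood size, regularization, and toy data are assumptions, and the NPNN network trained on top of these weights is not reproduced.

    import numpy as np

    def reconstruction_weights(X, k=5, reg=1e-3):
        # For each sample, find weights over its k nearest neighbors that best
        # reconstruct it; weights solve a regularized local Gram system and sum to 1.
        n = X.shape[0]
        W = np.zeros((n, n))
        for i in range(n):
            dists = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.argsort(dists)[1:k + 1]            # skip the sample itself
            Z = X[nbrs] - X[i]                           # neighbors centered on x_i
            G = Z @ Z.T + reg * np.eye(k)                # regularized local Gram matrix
            w = np.linalg.solve(G, np.ones(k))
            W[i, nbrs] = w / w.sum()                     # normalize to sum to one
        return W

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 8))                         # toy "normal process" data
    W = reconstruction_weights(X)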
https://www.readbyqxmd.com/read/30336327/reachable-set-estimation-for-markovian-jump-neural-networks-with-time-varying-delay
#14
Wen-Juan Lin, Yong He, Min Wu, Qingping Liu
This paper is concerned with reachable set estimation for Markovian jump neural networks with time-varying delay and bounded peak inputs. The objective is to find a description of a reachable set that contains all reachable states starting from the origin. In the framework of the Lyapunov-Krasovskii functional method, an appropriate Lyapunov-Krasovskii functional is constructed first. Then, by using the Wirtinger-based integral inequality and the extended reciprocally convex matrix inequality, an ellipsoidal description of the reachable set for the considered neural networks is derived...
September 29, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30317133/learning-in-the-machine-recirculation-is-random-backpropagation
#15
P Baldi, P Sadowski
Learning in physical neural systems must rely on learning rules that are local in both space and time. Optimal learning in deep neural architectures requires that non-local information be available to the deep synapses. Thus, in general, optimal learning in physical neural systems requires the presence of a deep learning channel to communicate non-local information to deep synapses, in a direction opposite to the forward propagation of the activities. Theoretical arguments suggest that for circular autoencoders, an important class of neural architectures where the output layer is identical to the input layer, alternative algorithms may exist that enable local learning without the need for additional learning channels, by using the forward activation channel as the deep learning channel...
September 27, 2018: Neural Networks: the Official Journal of the International Neural Network Society
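Random backpropagation (also known as feedback alignment) replaces the transposed forward weights in the backward pass with fixed random matrices, so the error signal reaching deep synapses no longer requires weight transport. The toy numpy sketch below applies this to a one-hidden-layer regression network; the architecture, data, and learning rate are assumptions, and the paper's recirculation result for circular autoencoders is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 10))
    Y = np.sin(X @ rng.normal(size=(10, 1)))             # toy regression target

    W1 = rng.normal(scale=0.1, size=(10, 32))
    W2 = rng.normal(scale=0.1, size=(32, 1))
    B = rng.normal(scale=0.1, size=(1, 32))              # fixed random feedback matrix

    for _ in range(2000):
        H = np.tanh(X @ W1)                              # forward pass
        e = H @ W2 - Y                                   # output error
        dW2 = H.T @ e / len(X)
        dH = (e @ B) * (1 - H**2)                        # random feedback replaces e @ W2.T
        dW1 = X.T @ dH / len(X)
        W1 -= 0.1 * dW1
        W2 -= 0.1 * dW2
    print(np.mean(e**2))                                 # training error typically decreases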
https://www.readbyqxmd.com/read/30312960/an-improved-stability-result-for-delayed-takagi-sugeno-fuzzy-cohen-grossberg-neural-networks
#16
Zeynep Orman
This work proposes a novel and improved delay-independent global asymptotic stability criterion for delayed Takagi-Sugeno (T-S) fuzzy Cohen-Grossberg neural networks with nondecreasing activation functions having bounded slopes, exploiting a suitable fuzzy-type Lyapunov functional. The proposed stability criterion can be easily validated as it is completely expressed in terms of the system matrices of the fuzzy neural network model considered. It will be shown that the stability criterion obtained in this work for this type of fuzzy neural networks improves and generalizes some of the previously published stability results...
September 24, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30312961/multiple-mittag-leffler-stability-of-fractional-order-competitive-neural-networks-with-gaussian-activation-functions
#17
Pingping Liu, Xiaobing Nie, Jinling Liang, Jinde Cao
In this paper, we explore the coexistence and dynamical behaviors of multiple equilibrium points for fractional-order competitive neural networks with Gaussian activation functions. By virtue of the geometrical properties of the activation functions, the fixed point theorem and the theory of fractional-order differential equations, some sufficient conditions are established to guarantee that such n-neuron neural networks have exactly 3^k equilibrium points with 0 ≤ k ≤ n, among which 2^k equilibrium points are locally Mittag-Leffler stable...
September 21, 2018: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/30312959/a-video-driven-model-of-response-statistics-in-the-primate-middle-temporal-area
#18
Omid Rezai, Pinar Boyraz Jentsch, Bryan Tripp
Neurons in the primate middle temporal area (MT) encode information about visual motion and binocular disparity. MT has been studied intensively for decades, so there is a great deal of information in the literature about MT neuron tuning. In this study, our goal is to consolidate some of this information into a statistical model of the MT population response. The model accepts arbitrary stereo video as input. It uses computer-vision methods to calculate known correlates of the responses (such as motion velocity), and then predicts activity using a combination of tuning functions that have previously been used to describe data in various experiments...
September 21, 2018: Neural Networks: the Official Journal of the International Neural Network Society
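One of the simplest tuning functions such population models combine is a smooth, bell-shaped dependence of firing rate on motion direction, often written as a von Mises (circular Gaussian) curve. The sketch below is a generic illustration of this kind of correlate-to-rate mapping, not the specific tuning functions fitted in the paper; all parameter values are assumptions.

    import numpy as np

    def direction_tuning(theta, pref=0.0, kappa=3.0, r_max=40.0, r_base=2.0):
        # Von Mises-style direction tuning: rate peaks at the preferred direction
        # `pref` and falls off smoothly; kappa controls the tuning width.
        return r_base + r_max * np.exp(kappa * (np.cos(theta - pref) - 1.0))

    directions = np.deg2rad(np.arange(0, 360, 45))
    rates = direction_tuning(directions, pref=np.deg2rad(90))   # spikes/s (illustrative)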
https://www.readbyqxmd.com/read/30317134/echo-state-networks-are-universal
#19
Lyudmila Grigoryeva, Juan-Pablo Ortega
This paper shows that echo state networks are universal uniform approximants in the context of discrete-time fading memory filters with uniformly bounded inputs defined on negative infinite times. This result guarantees that any fading memory input/output system in discrete time can be realized as a simple finite-dimensional neural network-type state-space model with a static linear readout map. This approximation is valid for infinite time intervals. The proof of this statement is based on fundamental results, also presented in this work, about the topological nature of the fading memory property and about reservoir computing systems generated by continuous reservoir maps...
September 20, 2018: Neural Networks: the Official Journal of the International Neural Network Society
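The "finite-dimensional neural network-type state-space model with a static linear readout map" of the abstract is the familiar echo state network: a fixed random recurrent reservoir driven by the input, with only the linear readout trained, e.g. by least squares. The numpy sketch below builds such a model for an assumed one-step-ahead prediction task; reservoir size, spectral radius, washout length, and the task itself are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_res, washout = 200, 100
    u = np.sin(0.1 * np.arange(2000))                    # toy scalar input signal
    target = np.roll(u, -1)                              # predict the next input value

    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in * u[t])                 # fixed reservoir update
        states[t] = x

    # Static linear readout fitted by least squares on post-washout states.
    w_out, *_ = np.linalg.lstsq(states[washout:-1], target[washout:-1], rcond=None)
    pred = states[washout:-1] @ w_out
    print(np.mean((pred - target[washout:-1])**2))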
https://www.readbyqxmd.com/read/30317132/soft-hardwired-attention-an-lstm-framework-for-human-trajectory-prediction-and-abnormal-event-detection
#20
Tharindu Fernando, Simon Denman, Sridha Sridharan, Clinton Fookes
As humans, we possess an intuitive ability for navigation which we master through years of practice; however, existing approaches to modeling this trait for diverse tasks, including monitoring pedestrian flow and detecting abnormal events, have been limited by the use of a variety of hand-crafted features. Recent research in the area of deep learning has demonstrated the power of learning features directly from the data, and related research in recurrent neural networks has shown exemplary results in sequence-to-sequence problems such as neural machine translation and neural image caption generation...
September 20, 2018: Neural Networks: the Official Journal of the International Neural Network Society
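In general form, the "soft" half of the soft+hardwired attention above is a softmax-weighted sum over a set of neighboring hidden states, while the hardwired half fixes some weights from domain knowledge (e.g. distance) instead of learning them. The numpy sketch below shows generic soft attention only; the dimensions, score function, and data are assumptions and do not reproduce the paper's exact formulation.

    import numpy as np

    def soft_attention(query, keys, values):
        # Scaled dot-product soft attention: softmax over query-key scores,
        # returning a convex combination of the value vectors.
        scores = keys @ query / np.sqrt(len(query))
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ values, weights

    rng = np.random.default_rng(0)
    hidden = rng.normal(size=(6, 16))          # hidden states of 6 neighboring trajectories
    query = rng.normal(size=16)                # current pedestrian's hidden state
    context, w = soft_attention(query, hidden, hidden)   # attention-weighted context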