Neural Networks: the Official Journal of the International Neural Network Society

https://www.readbyqxmd.com/read/28732233/piecewise-convexity-of-artificial-neural-networks
#1
Blaine Rister, Daniel L Rubin
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results...
July 3, 2017: Neural Networks: the Official Journal of the International Neural Network Society
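The piecewise affine activations the abstract refers to include the now-standard ReLU. A minimal sketch (the weights below are hypothetical, not from the paper) of why such a network is piecewise affine: wherever the ReLU on/off pattern is fixed, the output is exactly affine in the input.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def net(x, W1, b1, w2):
    """A 1-D, 2-hidden-unit network: f(x) = w2 . relu(W1*x + b1)."""
    h = relu([W1[i] * x + b1[i] for i in range(len(W1))])
    return sum(w * hi for w, hi in zip(w2, h))

# Hypothetical parameters; both hidden units are active for x in (0, 1).
W1, b1, w2 = [1.0, -1.0], [0.0, 1.0], [2.0, 3.0]

# Inside that region the activation pattern is constant, so equal input
# steps give equal output steps: the function is affine there.
d1 = net(0.4, W1, b1, w2) - net(0.3, W1, b1, w2)
d2 = net(0.6, W1, b1, w2) - net(0.5, W1, b1, w2)
print(abs(d1 - d2) < 1e-12)  # identical slope within the region
```

Gradient descent on such a network therefore minimizes a function that is affine on each cell of an input-space partition, which is the structure the paper's guarantees exploit.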
https://www.readbyqxmd.com/read/28732230/maximum-likelihood-optimal-and-robust-support-vector-regression-with-lncosh-loss-function
#2
Omer Karal
In this paper, a novel and continuously differentiable convex loss function based on natural logarithm of hyperbolic cosine function, namely lncosh loss, is introduced to obtain Support Vector Regression (SVR) models which are optimal in the maximum likelihood sense for the hyper-secant error distributions. Most of the current regression models assume that the distribution of error is Gaussian, which corresponds to the squared loss function and has helpful analytical properties such as easy computation and analysis...
June 30, 2017: Neural Networks: the Official Journal of the International Neural Network Society
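The lncosh loss is the natural logarithm of the hyperbolic cosine of the error. A hedged sketch (not the paper's SVR formulation) of its key property: it behaves like the squared loss near zero and like the absolute loss for large errors, evaluated in a numerically stable form.

```python
import math

def lncosh(e):
    """lncosh loss ln(cosh(e)): smooth, convex, differentiable everywhere.
    Uses ln(cosh(e)) = |e| + ln(1 + exp(-2|e|)) - ln(2) to avoid the
    overflow of math.cosh for large |e|."""
    a = abs(e)
    return a + math.log1p(math.exp(-2.0 * a)) - math.log(2.0)

# Near zero it tracks the squared loss e**2 / 2 ...
print(lncosh(0.01), 0.01 ** 2 / 2)
# ... and for large errors it tracks the absolute loss |e| - ln(2).
print(lncosh(100.0), 100.0 - math.log(2.0))
```

This interpolation between squared and absolute loss is what makes the resulting SVR models robust to heavy-tailed errors while remaining smooth.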
https://www.readbyqxmd.com/read/28732232/periodicity-and-stability-for-variable-time-impulsive-neural-networks
#3
Hongfei Li, Chuandong Li, Tingwen Huang
The paper considers a general neural network model with variable-time impulses. It is shown, under several new well-posed assumptions, that each solution of the system intersects every discontinuity surface exactly once. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as comparison systems for the variable-time impulsive neural networks...
June 29, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28732231/kernel-dynamic-policy-programming-applicable-reinforcement-learning-to-robot-systems-with-high-dimensional-states
#4
Yunduan Cui, Takamitsu Matsubara, Kenji Sugimoto
We propose a new value function approach for model-free reinforcement learning in Markov decision processes with high-dimensional states that addresses the issues of brittleness and intractable computational complexity, thereby rendering value-function-based reinforcement learning algorithms applicable to high-dimensional systems. Our new algorithm, Kernel Dynamic Policy Programming (KDPP), smoothly updates the value function in accordance with the Kullback-Leibler divergence between the current and updated policies...
June 29, 2017: Neural Networks: the Official Journal of the International Neural Network Society
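The KL-smoothed update the abstract describes can be illustrated generically (this is a textbook KL-regularized policy update, not KDPP itself, and the action values below are hypothetical): the new policy is proportional to the old one times exp(Q/η), so a larger temperature η keeps the update closer to the current policy.

```python
import math

def kl_regularized_update(pi_old, q, eta):
    """New policy proportional to pi_old * exp(q / eta): large eta means a
    small KL step from pi_old; small eta approaches the greedy policy."""
    unnorm = [p * math.exp(qa / eta) for p, qa in zip(pi_old, q)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def kl(p, q):
    """KL divergence D(p || q) over a finite action set."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

pi0 = [0.25, 0.25, 0.25, 0.25]        # uniform current policy
q = [1.0, 0.0, 0.0, 0.0]              # hypothetical action values
cautious = kl_regularized_update(pi0, q, eta=10.0)
greedy = kl_regularized_update(pi0, q, eta=0.1)
print(kl(cautious, pi0) < kl(greedy, pi0))  # larger eta, smaller move
```

Smoothing updates this way is what tames the brittleness of purely greedy value-function methods; KDPP's contribution is kernelizing it for high-dimensional states.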
https://www.readbyqxmd.com/read/28646764/adaptive-near-optimal-neuro-controller-for-continuous-time-nonaffine-nonlinear-systems-with-constrained-input
#5
Kasra Esfandiari, Farzaneh Abdollahi, Heidar Ali Talebi
In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems having saturated control signal. By employing two Neural Networks (NNs), the solution of Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds...
June 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28646763/deep-neural-mapping-support-vector-machines
#6
Yujian Li, Ting Zhang
The choice of kernel has an important effect on the performance of a support vector machine (SVM). This effect can be reduced by NEUROSVM, an architecture that uses a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified to an input layer, several hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained, together with a virtual ordinary output layer, by backpropagation; the output of its last hidden layer is then taken as the input of the SVM classifier, which is trained separately...
June 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28672189/multi-scale-modeling-of-altered-synaptic-plasticity-related-to-amyloid-%C3%AE-effects
#7
Takumi Matsuzawa, László Zalányi, Tamás Kiss, Péter Érdi
As suggested by Palop and Mucke (2010), pathologically elevated β-amyloid (Aβ) impairs long-term potentiation (LTP) and enhances long-term depression (LTD), possible underlying mechanisms in Alzheimer's disease (AD). In the present paper we adopt and further elaborate a phenomenological computational model of bidirectional plasticity based on the calcium control hypothesis of Shouval et al. (2002). First, to account for Aβ effects, the activation function Ω was modified assuming competition between LTP and LTD, and parameter sets were identified that describe well both normal and pathological synaptic plasticity processes...
June 20, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28686946/novel-density-based-and-hierarchical-density-based-clustering-algorithms-for-uncertain-data
#8
Xianchao Zhang, Han Liu, Xiaotong Zhang
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, issues such as loss of uncertainty information, high time complexity, and non-adaptive thresholds have not been addressed well by the previous density-based algorithm FDBSCAN and the hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm, PDBSCAN, which improves the previous FDBSCAN in the following respects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, and direct reachability probability, thus reducing the complexity and solving the issue of the non-adaptive threshold (for core object judgement) in FDBSCAN...
June 16, 2017: Neural Networks: the Official Journal of the International Neural Network Society
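The quantity at the heart of both FDBSCAN and PDBSCAN is the probability that two uncertain objects lie within a distance threshold of each other. A toy Monte Carlo illustration of that probability in the sampling style the abstract attributes to FDBSCAN (the Gaussian object model and all parameters are hypothetical; the paper's more accurate computation is not reproduced here):

```python
import random

random.seed(0)

def p_within(center_x, center_y, sigma, eps, n=20000):
    """Monte Carlo estimate of P(|X - Y| <= eps) for two uncertain 1-D
    objects modeled as Gaussians around their reported centers."""
    hits = 0
    for _ in range(n):
        x = random.gauss(center_x, sigma)
        y = random.gauss(center_y, sigma)
        if abs(x - y) <= eps:
            hits += 1
    return hits / n

# Objects with closer centers get a higher neighborhood probability,
# which is what density-based clustering of uncertain data thresholds on.
p_near = p_within(0.0, 0.5, sigma=1.0, eps=1.0)
p_far = p_within(0.0, 3.0, sigma=1.0, eps=1.0)
print(p_near > p_far)
```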
https://www.readbyqxmd.com/read/28668660/accelerating-deep-neural-network-training-with-inconsistent-stochastic-gradient-descent
#9
Linnan Wang, Yi Yang, Renqiang Min, Srimat Chakradhar
Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once per epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, produces different training dynamics across batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem...
June 16, 2017: Neural Networks: the Official Journal of the International Neural Network Society
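The baseline that ISGD modifies is plain mini-batch SGD, in which every batch triggers exactly one update per epoch regardless of its gradient variance. A minimal sketch of that baseline on hypothetical 1-D least-squares data (not a CNN, and not ISGD's uneven-effort schedule):

```python
import random

random.seed(1)

# Toy data: y = 3x + noise; fit the slope w by mini-batch SGD.
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(1, 41)]]
batches = [data[i:i + 8] for i in range(0, len(data), 8)]

w, lr = 0.0, 0.05
for epoch in range(50):
    random.shuffle(batches)
    for batch in batches:  # each batch updates the model exactly once per epoch
        grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad
print(round(w, 1))  # close to the true slope 3.0
```

ISGD's departure from this loop is to spend more updates on batches whose loss indicates under-training, rather than exactly one update each.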
https://www.readbyqxmd.com/read/28651080/robust-recursive-absolute-value-inequalities-discriminant-analysis-with-sparseness
#10
Chun-Na Li, Zeng-Rong Zheng, Ming-Zeng Liu, Yuan-Hai Shao, Wei-Jie Chen
In this paper, we propose a novel absolute value inequalities discriminant analysis (AVIDA) criterion for supervised dimensionality reduction. Compared with conventional linear discriminant analysis (LDA), the main characteristics of our AVIDA are robustness and sparseness. By reformulating the generalized eigenvalue problem in LDA as a related SVM-type "concave-convex" problem based on an absolute value inequalities loss, our AVIDA is not only more robust to outliers and noise, but also avoids the small sample size (SSS) problem...
June 9, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28715693/recommender-system-based-on-scarce-information-mining
#11
Wei Lu, Fu-Lai Chung, Kunfeng Lai, Liang Zhang
Guessing what a user may like is now a typical interface for video recommendation. Nowadays, highly popular user-generated-content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomenon of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation, called the Dirichlet mixture probit model for information scarcity (DPIS), is hence proposed...
May 31, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28600976/master-slave-exponential-synchronization-of-delayed-complex-valued-memristor-based-neural-networks-via-impulsive-control
#12
Xiaofan Li, Jian-An Fang, Huiyuan Li
This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system...
May 25, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28646762/hybrid-impulsive-and-switching-hopfield-neural-networks-with-state-dependent-impulses
#13
Xianxiu Zhang, Chuandong Li, Tingwen Huang
We discuss the global stability of switching Hopfield neural networks (HNN) with state-dependent impulses using the B-equivalence method. Under certain conditions, we show that the state-dependent impulsive switching systems can be reduced to fixed-time ones, and that the global stability of the corresponding comparison system implies the same stability of the considered system. On this basis, a novel stability criterion for the considered HNN is established. Finally, two numerical examples are given to demonstrate the effectiveness of our results...
May 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28599148/memristor-standard-cellular-neural-networks-computing-in-the-flux-charge-domain
#14
Mauro Di Marco, Mauro Forti, Luca Pancioni
The paper introduces a class of memristor neural networks (NNs) that are characterized by the following salient features. (a) The processing of signals takes place in the flux-charge domain and is based on the time evolution of memristor charges. The processing result is given by the constant asymptotic values of the charges, which are stored in the memristors acting as non-volatile memories in steady state. (b) The dynamic equations describing the memristor NNs in the flux-charge domain are analogous to those describing, in the traditional voltage-current domain, the dynamics of a standard (S) cellular (C) NN, and are implemented using a realistic model of memristors such as that proposed by HP...
May 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28582671/pinning-synchronization-of-memristor-based-neural-networks-with-time-varying-delays
#15
Zhanyu Yang, Biao Luo, Derong Liu, Yueheng Li
In this paper, the synchronization of memristor-based neural networks with time-varying delays via pinning control is investigated. A novel pinning method is introduced to synchronize two memristor-based neural networks, which denote the drive system and the response system, respectively. The dynamics are studied using the theories of differential inclusions and nonsmooth analysis. In addition, some sufficient conditions are derived to guarantee asymptotic synchronization and exponential synchronization of memristor-based neural networks via the presented pinning control...
May 18, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28575737/bayesian-geodesic-path-for-human-motor-control
#16
Ken Takiyama
Despite a near-infinite number of possible movement trajectories, our body movements exhibit certain invariant features across individuals; for example, when grasping a cup, individuals choose an approximately linear path from the hand to the cup. Based on these experimental findings, many researchers have proposed optimization frameworks to determine desired movement trajectories. Successful conventional frameworks include the geodesic path, which considers the geometry of our complicated body dynamics, and stochastic frameworks, which consider movement variability...
May 17, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28575735/fractional-order-leaky-integrate-and-fire-model-with-long-term-memory-and-power-law-dynamics
#17
Wondimu W Teka, Ranjit Kumar Upadhyay, Argha Mondal
Pyramidal neurons produce different spiking patterns to process information, communicate with each other, and transform information. These spiking patterns exhibit complex dynamics on multiple time scales, which have been described with the fractional-order leaky integrate-and-fire (FLIF) model. Models with fractional (non-integer) order differentiation, which generalize power-law dynamics, can be used to describe complex temporal voltage dynamics. The main characteristic of the FLIF model is that it depends on all past values of the voltage, which gives rise to long-term memory...
May 17, 2017: Neural Networks: the Official Journal of the International Neural Network Society
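The long-term memory of the FLIF model comes from the fractional derivative, which weights all past voltage values. A sketch of the Grünwald-Letnikov weights commonly used to discretize such a derivative (the order α below is hypothetical, not the paper's fitted value), showing their power-law decay:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k) for a fractional
    derivative of order alpha, via the recurrence w_k = w_{k-1}*(1-(alpha+1)/k).
    Past values enter with these coefficients, whose magnitude decays like
    k**-(1 + alpha): a power law rather than an exponential cutoff."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

alpha = 0.2  # hypothetical fractional order (alpha -> 1 recovers ordinary LIF)
w = gl_weights(alpha, 2000)

# Power-law check: doubling the lag scales the weight by about 2**-(1+alpha).
ratio = abs(w[1000]) / abs(w[500])
print(round(ratio, 3), round(2 ** -(1 + alpha), 3))
```

Because these weights never decay exponentially, distant voltage history still influences the present, which is the "long-term memory" the abstract refers to.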
https://www.readbyqxmd.com/read/28575736/collective-neurodynamic-optimization-for-economic-emission-dispatch-problem-considering-valve-point-effect-in-microgrid
#18
Tiancai Wang, Xing He, Tingwen Huang, Chuandong Li, Wei Zhang
The economic emission dispatch (EED) problem aims to control generation cost and reduce the environmental impact of waste gases. It has multiple constraints and nonconvex objectives. To solve it, the collective neurodynamic optimization (CNO) method, which combines a heuristic approach with a projection neural network (PNN), is applied to optimize the scheduling of an electrical microgrid with ten thermal generators and to minimize the sum of generation and emission costs. As the objective function has non-differentiable points due to the valve point effect (VPE), a differential inclusion approach is employed in the PNN model to deal with them...
May 15, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28552508/synchronization-of-stochastic-reaction-diffusion-neural-networks-with-dirichlet-boundary-conditions-and-unbounded-delays
#19
Yin Sheng, Zhigang Zeng
In this paper, synchronization of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and unbounded discrete time-varying delays is investigated. By virtue of theories of partial differential equations, inequality methods, and stochastic analysis techniques, pth moment exponential synchronization and almost sure exponential synchronization of the underlying neural networks are developed. The obtained results in this study enhance and generalize some earlier ones. The effectiveness and merits of the theoretical criteria are substantiated by two numerical simulations...
May 10, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28552507/ordinal-regression-based-on-learning-vector-quantization
#20
Fengzhen Tang, Peter Tiňo
Recently, ordinal regression, which predicts categories on an ordinal scale, has received considerable attention. In this paper, we propose a new approach to solving ordinal regression problems within the learning vector quantization framework. It extends the previous approach, termed ordinal generalized matrix learning vector quantization, with a more suitable and natural cost function, leading to more intuitive parameter update rules. Moreover, in our approach the bandwidth of the prototype weights is automatically adapted...
May 9, 2017: Neural Networks: the Official Journal of the International Neural Network Society
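The learning vector quantization framework the paper builds on moves class prototypes toward or away from training samples. A sketch of the classical LVQ1 update, not the paper's ordinal variant (prototypes, labels, and the sample are hypothetical):

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update on 1-D data: find the nearest prototype, pull it
    toward x if its class matches y, push it away otherwise. Returns the
    index of the prototype that moved."""
    i = min(range(len(prototypes)), key=lambda j: abs(prototypes[j] - x))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return i

protos, labels = [0.0, 10.0], ["low", "high"]
moved = lvq1_step(protos, labels, x=2.0, y="low")
print(moved, protos)  # prototype 0 is nearest and is pulled toward 2.0
```

Ordinal LVQ variants additionally exploit the order of the classes, so prototypes of nearby ranks are updated together rather than only the single winner.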