Neural Networks: the Official Journal of the International Neural Network Society

https://www.readbyqxmd.com/read/28910740/generation-of-low-gamma-oscillations-in-a-gabaergic-network-model-of-the-striatum
#1
Zhihua Wu, Aike Guo, Xiaodi Fu
Striatal oscillations in the low-gamma frequency range have been consistently recorded in a number of experimental studies. However, whether these rhythms are locally generated in the striatum circuit, which is mainly composed of GABAergic neurons, remains an open question. GABAergic medium spiny projection neurons represent the great majority of striatal neurons, but they fire at very low rates. GABAergic fast-spiking interneurons typically show firing rates that are approximately 10 times higher than those of principal neurons, but they are a very small minority of the total neuronal population...
September 11, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28892672/spiking-neural-p-systems-with-multiple-channels
#2
Hong Peng, Jinyu Yang, Jun Wang, Tao Wang, Zhang Sun, Xiaoxiao Song, Xiaohui Luo, Xiangnian Huang
Spiking neural P systems (SNP systems, in short) are a class of distributed parallel computing systems inspired by the neurophysiological behavior of biological spiking neurons. In this paper, we investigate a new variant of SNP systems in which each neuron has one or more synaptic channels, called spiking neural P systems with multiple channels (SNP-MC systems, in short). Spiking rules with channel labels are introduced to handle the firing mechanism of neurons, where the channel labels indicate the synaptic channels through which the generated spikes are transmitted...
August 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28888132/entropy-factor-for-randomness-quantification-in-neuronal-data
#3
K Rajdl, P Lansky, L Kostal
A novel measure of neural spike train randomness, an entropy factor, is proposed. It is based on the Shannon entropy of the number of spikes in a time window and can be seen as an analogy to the Fano factor. Theoretical properties of the new measure are studied for equilibrium renewal processes and further illustrated on gamma and inverse Gaussian probability distributions of interspike intervals. Finally, the entropy factor is evaluated from the experimental records of spontaneous activity in macaque primary visual cortex and compared to its theoretical behavior deduced for the renewal process models...
August 15, 2017: Neural Networks: the Official Journal of the International Neural Network Society
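The Fano-factor analogy in entry #3 can be illustrated with a minimal sketch. The window length, firing rate, and the plug-in entropy estimator below are illustrative choices of mine, not the paper's definitions; the paper's entropy factor involves a specific normalization that is not reproduced here.

```python
import numpy as np

def spike_counts(spike_times, t_end, window):
    """Count spikes in consecutive time windows of fixed length."""
    edges = np.arange(0.0, t_end + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def fano_factor(counts):
    """Classical Fano factor: variance-to-mean ratio of the spike counts."""
    return counts.var() / counts.mean()

def count_entropy(counts):
    """Shannon entropy (bits) of the empirical spike-count distribution."""
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -(p * np.log2(p)).sum()

# Poisson spike train: exponential inter-spike intervals, mean rate 10 Hz
rng = np.random.default_rng(0)
isi = rng.exponential(scale=0.1, size=5000)
times = np.cumsum(isi)
c = spike_counts(times, t_end=times[-1], window=1.0)
print(fano_factor(c))   # near 1 for a Poisson process
print(count_entropy(c))
```

Both quantities are functionals of the windowed spike counts, which is what makes the entropy-based measure a natural analogue of the Fano factor.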
https://www.readbyqxmd.com/read/28843090/neural-network-for-regression-problems-with-reduced-training-sets
#4
Mohammad Bataineh, Timothy Marler
Although they are powerful and successful in many applications, artificial neural networks (ANNs) typically do not perform well with complex problems that have a limited number of training cases. Often, collecting additional training data may not be feasible or may be costly. Thus, this work presents a new radial-basis network (RBN) design that overcomes the limitations of using ANNs to accurately model regression problems with minimal training data. This new design involves a multi-stage training process that couples an orthogonal least squares (OLS) technique with gradient-based optimization...
August 9, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28843092/a-patch-based-convolutional-neural-network-for-remote-sensing-image-classification
#5
Atharva Sharma, Xiuwen Liu, Xiaojun Yang, Di Shi
Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures...
August 8, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28843091/fast-dcnn-based-on-fwt-intelligent-dropout-and-layer-skipping-for-image-retrieval
#6
Asma ElAdel, Mourad Zaied, Chokri Ben Amar
Deep Convolutional Neural Networks (DCNNs) are a powerful tool for object and image classification and retrieval. However, the training stage of such networks is highly demanding in terms of storage space and time, and optimization remains a challenging subject. In this paper, we propose a fast DCNN based on the Fast Wavelet Transform (FWT), intelligent dropout, and layer skipping. The proposed approach improves image retrieval accuracy as well as search time. This is possible thanks to three key advantages: first, the rapid way to compute the features using FWT...
August 8, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28822323/discriminative-clustering-on-manifold-for-adaptive-transductive-classification
#7
Zhao Zhang, Lei Jia, Min Zhang, Bing Li, Li Zhang, Fanzhang Li
In this paper, we propose a novel adaptive transductive label propagation approach based on joint discriminative clustering on manifolds for representing and classifying high-dimensional data. Our framework seamlessly combines unsupervised manifold learning, discriminative clustering, and adaptive classification into a unified model, and incorporates adaptive graph weight construction with label propagation. Specifically, our method propagates label information using adaptive weights over low-dimensional manifold features, which differs from most existing studies that predict labels and construct weights in the original Euclidean space...
August 1, 2017: Neural Networks: the Official Journal of the International Neural Network Society
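For readers unfamiliar with the propagation step underlying entry #7, the classical (Zhou-style) label propagation update can be sketched as below. This is a fixed Gaussian-kernel graph with the standard iteration F ← αSF + (1−α)Y, not the authors' adaptive weights over manifold features; all parameter values are illustrative.

```python
import numpy as np

def label_propagation(X, y, alpha=0.99, sigma=1.0, iters=200):
    """Classical label propagation on a fixed Gaussian-kernel graph.

    y holds class indices for labeled points and -1 for unlabeled ones.
    (The paper additionally learns the graph weights adaptively on a
    low-dimensional manifold; here W is fixed.)
    """
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    dinv = 1.0 / np.sqrt(W.sum(1))
    S = W * dinv[:, None] * dinv[None, :]      # symmetric normalization
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y    # propagate, then re-anchor
    return classes[F.argmax(1)]

# Two well-separated clusters, one labeled point each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
y = -np.ones(40, dtype=int)
y[0], y[20] = 0, 1
pred = label_propagation(X, y)
```

The (1−α)Y term keeps the labeled points anchored to their known classes while the αSF term diffuses label mass along graph edges.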
https://www.readbyqxmd.com/read/28806718/stochastic-separation-theorems
#8
A N Gorban, I Y Tyukin
The problem of non-iterative one-shot and non-destructive correction of unavoidable mistakes arises in all Artificial Intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation can be achieved with probability close to one by linear discriminants. Based on fundamental properties of measure concentration, we show that for M < a exp(bn), random M-element sets in R^n are linearly separable with probability p > 1 − ϑ, where 1 > ϑ > 0 is a given small constant...
July 31, 2017: Neural Networks: the Official Journal of the International Neural Network Society
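The separability claim in entry #8 is easy to probe numerically. The sketch below is an illustration, not the theorem: it samples points uniformly in the unit ball and tests, for each point x, the simple linear discriminant z ↦ ⟨x, z⟩ at threshold (1 − ε)‖x‖², a construction of the kind used in this literature; the dimensions, set size, and ε are my choices.

```python
import numpy as np

def uniform_ball(m, n, rng):
    """m points drawn uniformly from the n-dimensional unit ball."""
    v = rng.standard_normal((m, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = rng.random(m) ** (1.0 / n)         # correct radial distribution
    return v * r[:, None]

def separable_fraction(X, eps=0.1):
    """Fraction of points x separated from all others by the hyperplane
    <x, z> = (1 - eps) * |x|^2."""
    G = X @ X.T
    norms2 = np.diag(G).copy()
    np.fill_diagonal(G, -np.inf)           # ignore self-inner-products
    return (G.max(1) < (1 - eps) * norms2).mean()

rng = np.random.default_rng(0)
X = uniform_ball(1000, 200, rng)           # M = 1000 points in R^200
frac = separable_fraction(X)
print(frac)                                # close to 1 in high dimension
```

In dimension 200, inner products between independent points concentrate near zero while squared norms concentrate near one, so essentially every point is separable from the rest by this one-line discriminant.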
https://www.readbyqxmd.com/read/28806717/efficient-construction-of-sparse-radial-basis-function-neural-networks-using-l1-regularization
#9
Xusheng Qian, He Huang, Xiaoping Chen, Tingwen Huang
This paper investigates the construction of sparse radial basis function neural networks (RBFNNs) for classification problems. An efficient two-phase construction algorithm (which is abbreviated as TPCLR1 for simplicity) is proposed by using L1 regularization. In the first phase, an improved maximum data coverage (IMDC) algorithm is presented for the initialization of RBF centers and widths. Then a specialized Orthant-Wise Limited-memory Quasi-Newton (sOWL-QN) method is employed to perform simultaneous network pruning and parameter optimization in the second phase...
July 27, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28806716/efficient-hardware-implementation-of-the-subthalamic-nucleus-external-globus-pallidus-oscillation-system-and-its-dynamics-investigation
#10
Shuangming Yang, Xile Wei, Jiang Wang, Bin Deng, Chen Liu, Haitao Yu, Huiyan Li
Modeling and implementation of nonlinear neural systems with physiologically plausible dynamic behaviors are of considerable importance in the field of computational neuroscience. This study introduces a novel hardware platform for investigating the dynamical behaviors of the nonlinear subthalamic nucleus-external globus pallidus system. To reduce implementation complexity, a hardware-oriented conductance-based subthalamic nucleus (STN) model is presented, which can accurately reproduce the dynamical characteristics of biological conductance-based STN cells...
July 26, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28850900/a-multivariate-extension-of-mutual-information-for-growing-neural-networks
#11
Kenneth R Ball, Christopher Grant, William R Mundy, Timothy J Shafer
Recordings of neural network activity in vitro are increasingly being used to assess the development of neural network activity and the effects of drugs, chemicals and disease states on neural network function. The high-content nature of the data derived from such recordings can be used to infer effects of compounds or disease states on a variety of important neural functions, including network synchrony. Historically, synchrony of networks in vitro has been assessed either by determination of correlation coefficients (e...
July 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28802162/efficient-dynamic-graph-construction-for-inductive-semi-supervised-learning
#12
F Dornaika, R Dahbi, A Bosaghzadeh, Y Ruichek
Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Graph construction for the inductive setting, in which data arrive sequentially, has received much less attention. In inductive settings, constructing the graph from scratch can be very time consuming. This paper introduces a generic framework that can make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph...
July 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28793243/few-shot-learning-in-deep-networks-through-global-prototyping
#13
Sebastian Blaes, Thomas Burwick
Training a deep convolutional neural network (CNN) to succeed in visual object classification usually requires a great number of examples. Here, starting from such a pre-learned CNN, we study the task of extending the network to classify additional categories on the basis of only few examples ("few-shot learning"). We find that a simple and fast prototype-based learning procedure in the global feature layers ("Global Prototype Learning", GPL) leads to some remarkably good classification results for a large portion of the new classes...
July 24, 2017: Neural Networks: the Official Journal of the International Neural Network Society
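The generic prototype idea behind entry #13 can be sketched as follows: average the feature vectors of the few examples of each new class and classify by nearest prototype. This is the textbook nearest-prototype scheme, not the GPL procedure itself, and the synthetic Gaussian "features" below stand in for the pre-trained CNN's global feature layer.

```python
import numpy as np

def build_prototypes(features, labels):
    """One prototype per class: the mean feature vector of its examples."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(0) for c in classes])
    return classes, protos

def classify(features, classes, protos):
    """Assign each feature vector to the class of the nearest prototype."""
    d = ((features[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(1)]

# Toy stand-in features: 3 new classes, 5 examples each ("5-shot")
rng = np.random.default_rng(2)
centers = rng.normal(0, 5, (3, 16))
few_shot = np.repeat(centers, 5, 0) + rng.normal(0, 0.5, (15, 16))
few_labels = np.repeat(np.arange(3), 5)
classes, protos = build_prototypes(few_shot, few_labels)

test = np.repeat(centers, 20, 0) + rng.normal(0, 0.5, (60, 16))
pred = classify(test, classes, protos)
```

No gradient training is needed for the new classes, which is what makes prototype-based extension fast relative to fine-tuning the whole network.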
https://www.readbyqxmd.com/read/28806715/recurrent-networks-with-soft-thresholding-nonlinearities-for-lightweight-coding
#14
MohammadMehdi Kafashan, ShiNung Ching
A long-standing and influential hypothesis in neural information processing is that early sensory networks adapt themselves to produce efficient codes of afferent inputs. Here, we show how a nonlinear recurrent network provides an optimal solution for the efficient coding of an afferent input and its history. We specifically consider the problem of producing lightweight codes, ones that minimize both ℓ1 and ℓ2 constraints on sparsity and energy, respectively. When embedded in a linear coding paradigm, this problem results in a non-smooth convex optimization problem...
July 22, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28806714/f-norm-distance-metric-based-robust-2dpca-and-face-recognition
#15
Tao Li, Mengyuan Li, Quanxue Gao, Deyan Xie
Two-dimensional principal component analysis (2DPCA) employs the squared F-norm as the distance metric for dimensionality reduction. It is commonly known that the squared F-norm is sensitive to the presence of outliers. To address this problem, we use the F-norm instead of the squared F-norm as the distance metric in the objective function and develop a non-greedy algorithm, which has a closed-form solution in each iteration and can maximize the criterion function, to obtain the optimal solution. Our approach not only is robust to outliers but also well characterizes the geometric structure of the data...
July 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28797759/robustness-of-learning-algorithms-using-hinge-loss-with-outlier-indicators
#16
Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda
We propose a unified formulation of robust learning methods for classification and regression problems. In the learning methods, the hinge loss is used with outlier indicators in order to detect outliers in the observed data. To analyze the robustness property, we evaluate the breakdown point of the learning methods in the situation that the outlier ratio is not necessarily small. Although minimization of the hinge loss with outlier indicators is a non-convex optimization problem, we prove that any local optimal solution of our learning algorithms has the robustness property...
July 21, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28772239/a-novel-deep-learning-algorithm-for-incomplete-face-recognition-low-rank-recovery-network
#17
Jianwei Zhao, Yongbiao Lv, Zhenghua Zhou, Feilong Cao
Many methods address the recognition of complete face images. However, in real applications the images to be recognized are usually incomplete, and such recognition is considerably more difficult. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to overcome this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via matrix completion with the truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters...
July 20, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28779599/bump-competition-and-lattice-solutions-in-two-dimensional-neural-fields
#18
August Romeo, Hans Supèr
Some forms of competition among activity bumps in a two-dimensional neural field are studied. First, threshold dynamics is included and rivalry evolutions are considered. The relations between parameters and dominance durations can match experimental observations about ageing. Next, the threshold dynamics is omitted from the model and we focus on the properties of the steady-state. From noisy inputs, hexagonal grids are formed by a symmetry-breaking process. Particular issues about solution existence and stability conditions are considered...
July 19, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28772240/dynamic-response-and-transfer-function-of-social-systems-a-neuro-inspired-model-of-collective-human-activity-patterns
#19
Ilias N Lymperopoulos
The interaction of social networks with the external environment gives rise to non-stationary activity patterns reflecting the temporal structure and strength of exogenous influences that drive social dynamical processes far from an equilibrium state. Following a neuro-inspired approach, based on the dynamics of a passive neuronal membrane, and the firing rate dynamics of single neurons and neuronal populations, we build a state-of-the-art model of the collective social response to exogenous interventions. In this regard, we analyze online activity patterns with a view to determining the transfer function of social systems, that is, the dynamic relationship between external influences and the resulting activity...
July 14, 2017: Neural Networks: the Official Journal of the International Neural Network Society
https://www.readbyqxmd.com/read/28756334/error-bounds-for-approximations-with-deep-relu-networks
#20
Dmitry Yarotsky
We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds for the network complexity in the setting of approximation in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. For approximations of 1D Lipschitz functions, we describe adaptive depth-6 network architectures that are more efficient than the standard shallow architecture...
July 13, 2017: Neural Networks: the Official Journal of the International Neural Network Society
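The depth efficiency studied in entry #20 can be illustrated with the well-known hat-function construction from this line of work: composing a ReLU-expressible "hat" with itself yields a piecewise-linear approximation of x² whose error shrinks by a factor of 4 per extra composition. The sketch below follows the standard construction; the grid resolution is my choice.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """Hat function g(x) = 2x on [0, 1/2], 2(1 - x) on [1/2, 1], 0 outside,
    written as a combination of three ReLU units."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def x_squared_approx(x, m):
    """Depth-m ReLU approximation of x^2 on [0, 1]:
    f_m(x) = x - sum_{s=1}^m g^(s)(x) / 4^s, where g^(s) is the s-fold
    composition of the hat function.  f_m is the piecewise-linear
    interpolant of x^2 on a grid of spacing 2^-m, so the max error is
    the standard bound 4^-(m+1)."""
    g, out = x.copy(), x.copy()
    for s in range(1, m + 1):
        g = hat(g)                 # one more layer of composition
        out -= g / 4.0 ** s
    return out

x = np.linspace(0.0, 1.0, 2001)
err = np.abs(x_squared_approx(x, 5) - x ** 2).max()
print(err)   # at most 4^-6 for m = 5
```

Each extra layer adds a fixed number of ReLU units yet quarters the error, which is the kind of depth-versus-width trade-off the paper's bounds make rigorous.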