Neural Computation
https://www.readbyqxmd.com/read/29652591/a-reinforcement-learning-neural-network-for-robotic-manipulator-control
#1
Yazhou Hu, Bailu Si
We propose a neural network model for reinforcement learning to control a robotic manipulator with unknown parameters and dead zones. The model is composed of three networks. The state of the robotic manipulator is predicted by the state network of the model, the action policy is learned by the action network, and the performance index of the action policy is estimated by a critic network. The three networks work together to optimize the performance index based on the reinforcement learning control scheme. The convergence of the learning methods is analyzed...
April 13, 2018: Neural Computation
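The three-network scheme described above can be illustrated with a toy sketch: a state network learns the unknown dynamics by regression, a critic estimates the performance index by temporal-difference learning, and the action network descends that index by differentiating through the learned model. Everything below is an illustrative assumption (a scalar linear plant, quadratic cost, fixed learning rates), not the letter's manipulator model with dead zones or its convergence analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy plant x' = a*x + b*u with unknown parameters, standing in for
# the manipulator dynamics treated in the letter.
a_true, b_true = 0.9, 0.5
alpha, gamma = 0.05, 0.95

W_state = np.zeros(2)   # state network: predicts x' from (x, u)
w_actor = 0.0           # action network: policy u = w_actor * x
w_critic = 0.0          # critic network: performance index V(x) ~ w_critic * x^2

for _ in range(1000):
    x = rng.uniform(-1, 1)                 # resample the state for excitation
    u = w_actor * x + rng.uniform(-1, 1)   # exploratory action
    x_next = a_true * x + b_true * u       # transition of the unknown plant

    # State network: LMS regression on the observed transition.
    feats = np.array([x, u])
    W_state += alpha * (x_next - W_state @ feats) * feats

    # Critic network: TD(0) estimate of the quadratic performance index.
    cost = x**2 + 0.1 * u**2
    td = cost + gamma * w_critic * x_next**2 - w_critic * x**2
    w_critic += alpha * td * x**2

    # Action network: descend the index, differentiating through the model.
    dJ_du = 0.2 * u + gamma * 2.0 * w_critic * x_next * W_state[1]
    w_actor -= alpha * dJ_du * x

state_model_error = float(np.abs(W_state - np.array([a_true, b_true])).sum())
```

After training, the state network should have identified the plant parameters and the policy gain should have turned negative (stabilizing), with the critic holding a positive cost-to-go estimate.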
https://www.readbyqxmd.com/read/29652590/why-does-large-batch-training-result-in-poor-generalization-a-comprehensive-explanation-and-a-better-strategy-from-the-viewpoint-of-stochastic-optimization
#2
Tomoumi Takase, Satoshi Oyama, Masahito Kurihara
We present a comprehensive framework of search methods, such as simulated annealing and batch training, for solving nonconvex optimization problems. These methods search a wider range by gradually decreasing the randomness added to the standard gradient descent method. The formulation that we define on the basis of this framework can be directly applied to neural network training. This produces an effective approach that gradually increases batch size during training. We also explain why large batch training degrades generalization performance, which previous studies have not clarified...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652589/distributed-newton-methods-for-deep-neural-networks
#3
Chien-Chih Wang, Kent Loong Tan, Chun-Ting Chen, Yu-Hsiang Lin, S Sathiya Keerthi, Dhruv Mahajan, S Sundararajan, Chih-Jen Lin
Deep learning involves a difficult nonconvex optimization problem with a large number of weights between any two adjacent layers of a deep structure. To handle large data sets or complicated networks, distributed training is needed, but the calculation of function, gradient, and Hessian is expensive. In particular, the communication and the synchronization cost may become a bottleneck. In this letter, we focus on situations where the model is distributedly stored and propose a novel distributed Newton method for training deep neural networks...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652588/joint-estimation-of-effective-brain-wave-activation-modes-using-eeg-meg-sensor-arrays-and-multimodal-mri-volumes
#4
Vitaly L Galinsky, Antigona Martinez, Martin P Paulus, Lawrence R Frank
In this letter, we present a new method for integration of sensor-based multifrequency bands of electroencephalography and magnetoencephalography data sets into a voxel-based structural-temporal magnetic resonance imaging analysis by utilizing the general joint estimation using entropy regularization (JESTER) framework. This allows enhancement of the spatial-temporal localization of brain function and the ability to relate it to morphological features and structural connectivity. This method has broad implications for both basic neuroscience research and clinical neuroscience focused on identifying disease-relevant biomarkers by enhancing the spatial-temporal resolution of the estimates derived from current neuroimaging modalities, thereby providing a better picture of the normal human brain in basic neuroimaging experiments and variations associated with disease states...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652587/superspike-supervised-learning-in-multilayer-spiking-neural-networks
#5
Friedemann Zenke, Surya Ganguli
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns...
April 13, 2018: Neural Computation
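The surrogate-gradient trick at the heart of the above can be shown in isolation: the spike threshold has zero derivative almost everywhere, so learning substitutes a smooth "fast sigmoid" pseudo-derivative. The single threshold unit and the two target patterns below are assumptions for illustration; this is not the full voltage-based three-factor SuperSpike rule for multilayer networks.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, beta, lr = 1.0, 10.0, 0.5
w = rng.normal(scale=0.1, size=4)

x_pos = np.array([1.0, 1.0, 0.0, 0.0])   # pattern that should elicit a spike
x_neg = np.array([0.0, 0.0, 1.0, 1.0])   # pattern that should stay silent

def surrogate_deriv(v):
    # Fast-sigmoid pseudo-derivative standing in for H'(v - theta), which is
    # zero almost everywhere and would block any gradient.
    return 1.0 / (beta * np.abs(v - theta) + 1.0) ** 2

for _ in range(200):
    for x, target in ((x_pos, 1.0), (x_neg, 0.0)):
        v = w @ x                    # membrane potential of the unit
        spike = float(v > theta)     # hard, nondifferentiable threshold
        w -= lr * (spike - target) * surrogate_deriv(v) * x

spikes = [float(w @ x > theta) for x in (x_pos, x_neg)]
```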
https://www.readbyqxmd.com/read/29652586/a-computational-account-of-the-role-of-cochlear-nucleus-and-inferior-colliculus-in-stabilizing-auditory-nerve-firing-for-auditory-category-learning
#6
Irina Higgins, Simon Stringer, Jan Schnupp
It is well known that auditory nerve (AN) fibers overcome bandwidth limitations through the volley principle, a form of multiplexing. What is less well known is that the volley principle introduces a degree of unpredictability into AN neural firing patterns that may affect even simple stimulus categorization learning. We use a physiologically grounded, unsupervised spiking neural network model of the auditory brain with spike-timing-dependent plasticity learning to demonstrate that plastic auditory cortex is unable to learn even simple auditory object categories when exposed to the raw AN firing input without subcortical preprocessing...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652585/a-theory-of-sequence-indexing-and-working-memory-in-recurrent-neural-networks
#7
E Paxon Frady, Denis Kleyko, Friedrich T Sommer
To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and cross-talk noise limits the retrieval accuracy and information capacity...
April 13, 2018: Neural Computation
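The VSA-style sequence memory described above can be sketched directly: items are random high-dimensional bipolar codes, the memory trace is a superposition of items rotated by powers of an orthogonal matrix (here a cyclic permutation, an assumed choice), and readout undoes the rotation and matches against the codebook. Dimensions and sequence length are illustrative; as the abstract notes, the storage is lossy and the other stored items contribute cross-talk noise at readout.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_items, seq_len = 1024, 16, 5

codebook = rng.choice([-1.0, 1.0], size=(n_items, d))  # random bipolar codes
seq = [3, 1, 4, 1, 5]

def rotate(x, k):
    # Cyclic shift: a permutation, hence an orthogonal recurrent weight matrix.
    return np.roll(x, k)

# Write: superimpose the items, rotating older items further.
m = np.zeros(d)
for t, sym in enumerate(seq):
    m += rotate(codebook[sym], seq_len - 1 - t)

# Read: undo each rotation and take the best-matching codebook entry.
decoded = [int(np.argmax(codebook @ rotate(m, -(seq_len - 1 - t))))
           for t in range(seq_len)]
```

With d = 1024 the signal term dominates the cross-talk and the whole sequence decodes correctly; shrinking d degrades retrieval accuracy, which is the capacity trade-off the letter analyzes.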
https://www.readbyqxmd.com/read/29652584/optimal-readout-of-correlated-neural-activity-in-a-decision-making-circuit
#8
Matias Calderini, Sophie Zhang, Nareg Berberian, Jean-Philippe Thivierge
The neural correlates of decision making have been extensively studied with tasks involving a choice between two alternatives that is guided by visual cues. While a large body of work argues for a role of the lateral intraparietal (LIP) region of cortex in these tasks, this role may be confounded by the interaction between LIP and other regions, including the middle temporal (MT) area. Here, we describe a simplified linear model of decision making that is adapted to two tasks: a motion discrimination and a categorization task...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652583/new-families-of-skewed-higher-order-kernel-estimators-to-solve-the-bss-ica-problem-for-multimodal-sources-mixtures
#9
Ahmed Najah Jabbar
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian)...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652582/robust-mst-based-clustering-algorithm
#10
Qidong Liu, Ruisheng Zhang, Zhili Zhao, Zhenghai Wang, Mengyao Jiao, Guangjing Wang
Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. The grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster...
April 13, 2018: Neural Computation
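The MST grouping that minimax similarity builds on can be sketched as follows: compute the minimum spanning tree of the data, then cut the longest edges so that points stay grouped through chains of short links. The two-blob data and Prim's-algorithm implementation below are assumptions for illustration; the letter's robustness fixes for noise and outliers are not reproduced, and this baseline is exactly what those fixes target.

```python
import numpy as np

rng = np.random.default_rng(5)
blob_a = rng.normal(loc=0.0, scale=0.3, size=(20, 2))
blob_b = rng.normal(loc=5.0, scale=0.3, size=(20, 2))
pts = np.vstack([blob_a, blob_b])
n = len(pts)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

# Prim's algorithm: grow the minimum spanning tree from node 0.
in_tree = np.zeros(n, dtype=bool)
in_tree[0] = True
best = dist[0].copy()           # cheapest link from each node to the tree
parent = np.zeros(n, dtype=int)
edges = []
for _ in range(n - 1):
    j = int(np.argmin(np.where(in_tree, np.inf, best)))
    edges.append((parent[j], j, float(best[j])))
    in_tree[j] = True
    closer = dist[j] < best
    best[closer] = dist[j][closer]
    parent[closer] = j

# Cut the single longest MST edge; the two components are the clusters.
edges.sort(key=lambda e: e[2])
labels = np.arange(n)
for u, v, _ in edges[:-1]:
    labels[labels == labels[v]] = labels[u]

n_clusters = len(set(labels.tolist()))
```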
https://www.readbyqxmd.com/read/29652581/a-learning-framework-for-winner-take-all-networks-with-stochastic-synapses
#11
Hesham Mostafa, Gert Cauwenberghs
Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29652580/methods-for-assessment-of-memory-reactivation
#12
Shizhao Liu, Andres D Grosmark, Zhe Chen
It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing memory reactivation. To date, several statistical methods have been established for assessing memory reactivation based on bursts of ensemble neural spike activity during offline states. Using population-decoding methods, we propose a new statistical metric, the weighted distance correlation, to assess hippocampal memory reactivation (i...
April 13, 2018: Neural Computation
https://www.readbyqxmd.com/read/29566357/solving-constraint-satisfaction-problems-with-distributed-neocortical-like-neuronal-networks
#13
Ueli Rutishauser, Jean-Jacques Slotine, Rodney J Douglas
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate...
March 22, 2018: Neural Computation
https://www.readbyqxmd.com/read/29566356/designing-patient-specific-optimal-neurostimulation-patterns-for-seizure-suppression
#14
Roman A Sandler, Kunling Geng, Dong Song, Robert E Hampson, Mark R Witcher, Sam A Deadwyler, Theodore W Berger, Vasilis Z Marmarelis
Neurostimulation is a promising therapy for abating epileptic seizures. However, it is extremely difficult to identify optimal stimulation patterns experimentally. In this study, human recordings are used to develop a functional 24-neuron network statistical model of hippocampal connectivity and dynamics. Spontaneous seizure-like activity is induced in silico in this reconstructed neuronal network. The network is then used as a testbed to design and validate a wide range of neurostimulation patterns. Commonly used periodic trains were not able to permanently abate seizures at any frequency...
March 22, 2018: Neural Computation
https://www.readbyqxmd.com/read/29566355/predictive-coding-in-area-v4-dynamic-shape-discrimination-under-partial-occlusion
#15
Hannah Choi, Anitha Pasupathy, Eric Shea-Brown
The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion...
March 22, 2018: Neural Computation
https://www.readbyqxmd.com/read/29566354/novel-perceptually-uniform-chromatic-space
#16
María da Fonseca, Inés Samengo
Chromatically perceptive observers are endowed with a sense of similarity between colors. For example, two shades of green that are only slightly discriminable are perceived as similar, whereas other pairs of colors, for example, blue and yellow, typically elicit markedly different sensations. The notion of similarity need not be shared by different observers. Dichromat and trichromat subjects perceive colors differently, and two dichromats (or two trichromats, for that matter) may judge chromatic differences inconsistently...
March 22, 2018: Neural Computation
https://www.readbyqxmd.com/read/29566353/slowness-as-a-proxy-for-temporal-predictability-an-empirical-comparison
#17
Björn Weghenkel, Laurenz Wiskott
The computational principles of slowness and predictability have been proposed to describe aspects of information processing in the visual system. From the perspective of slowness being a limited special case of predictability, we investigate the relationship between these two principles empirically. On a collection of real-world data sets, we compare the features extracted by slow feature analysis (SFA) to the features of three recently proposed methods for predictable feature extraction: forecastable component analysis, predictable feature analysis, and graph-based predictable feature analysis...
March 22, 2018: Neural Computation
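Linear SFA, the baseline of the comparison above, can be sketched in two steps: whiten the input so every direction has unit variance, then pick the direction whose temporal derivative has the smallest variance. The toy data (a slow sinusoid mixed with a fast one through an assumed mixing matrix) is illustrative; the slow feature should recover the slow source.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)           # slowly varying latent source
fast = np.sin(37 * t)      # quickly varying latent source
A = np.array([[1.0, 0.7], [0.4, 1.0]])      # assumed mixing matrix
X = np.stack([slow, fast], axis=1) @ A.T    # observed mixed signals
X = X - X.mean(axis=0)

# Step 1: whiten, so every direction has unit variance.
cov = X.T @ X / len(X)
evals, evecs = np.linalg.eigh(cov)
Z = X @ (evecs / np.sqrt(evals))

# Step 2: among whitened directions, take the one whose temporal derivative
# has minimal variance -- the "slowest" feature.
dZ = np.diff(Z, axis=0)
d_evals, d_evecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))
slow_feature = Z @ d_evecs[:, 0]

slowness_match = float(abs(np.corrcoef(slow_feature, slow)[0, 1]))
```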
https://www.readbyqxmd.com/read/29566352/deep-semisupervised-zero-shot-learning-with-maximum-mean-discrepancy
#18
Lingling Zhang, Jun Liu, Minnan Luo, Xiaojun Chang, Qinghua Zheng
Due to the difficulty of collecting labeled images for hundreds of thousands of visual categories, zero-shot learning, where unseen categories do not have any labeled images in the training stage, has attracted increasing attention. In the past, many studies focused on transferring knowledge from seen to unseen categories by projecting all category labels into a semantic space. However, the label embeddings could not adequately express the semantics of categories. Furthermore, the common semantics of seen and unseen instances cannot be captured accurately because the distribution of these instances may be quite different...
March 22, 2018: Neural Computation
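The maximum mean discrepancy term named in the title can be sketched on its own: with an RBF kernel, the (biased) squared MMD compares average within-sample and cross-sample similarities, and grows when the two distributions differ. The Gaussian toy samples and bandwidth below are assumptions for illustration, not the paper's seen/unseen feature distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())

same_a = rng.normal(size=(100, 3))            # e.g. "seen" features
same_b = rng.normal(size=(100, 3))            # matched distribution
shifted = rng.normal(loc=2.0, size=(100, 3))  # mismatched distribution

mmd_same = mmd2_rbf(same_a, same_b)
mmd_diff = mmd2_rbf(same_a, shifted)
```

Minimizing such a term between embedded seen and unseen instances is one standard way to pull their distributions together.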
https://www.readbyqxmd.com/read/29566351/pattern-storage-bifurcations-and-groupwise-correlation-structure-of-an-exactly-solvable-asymmetric-neural-network-model
#19
Diego Fasoli, Anna Cattani, Stefano Panzeri
Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution...
March 22, 2018: Neural Computation
https://www.readbyqxmd.com/read/29566350/hidden-quantum-processes-quantum-ion-channels-and-1-f-%C3%AE-type-noise
#20
Alan Paris, Azadeh Vosoughi, Stephen A Berman, George Atia
In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from K and Na channel kinetics, in order to identify the source of 1/f-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise depends solely on the eigenvalues (time constants) of the kinetic matrix but that the Lorentzian weighting coefficients depend entirely on the eigenvectors of this matrix. We then show that there are rotations of the kinetic eigenvectors that send any initial weights to any target weights without altering the time constants...
March 22, 2018: Neural Computation
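The autocovariance claim above can be illustrated with the simplest Lorentzian-spectrum process: an Ornstein-Uhlenbeck simulation (an assumed stand-in for full channel kinetics) has autocovariance C(t) proportional to exp(-t / tau), so the measured decay over one time constant should sit near e^-1 regardless of how weights are assigned. Step size and tau are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
tau, dt, n = 5.0, 0.01, 400_000

# Euler-Maruyama simulation of an OU process, the canonical noise with a
# single Lorentzian spectrum.
noise = np.sqrt(dt) * rng.normal(size=n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = (1.0 - dt / tau) * x[i - 1] + noise[i]

def autocov(sig, lag):
    if lag == 0:
        return float(np.mean(sig * sig))
    return float(np.mean(sig[:-lag] * sig[lag:]))

lag = int(tau / dt)                      # one kinetic time constant
ratio = autocov(x, lag) / autocov(x, 0)  # should be near exp(-1)
```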