Advances in Neural Information Processing Systems

https://www.readbyqxmd.com/read/29657513/regularized-modal-regression-with-applications-in-cognitive-impairment-prediction
#1
Xiaoqian Wang, Hong Chen, Weidong Cai, Dinggang Shen, Heng Huang
Linear regression models have been successfully used for function estimation and model selection in high-dimensional data analysis. However, most existing methods are built on least squares with the mean squared error (MSE) criterion, which is sensitive to outliers and may degrade under heavy-tailed noise. In this paper, we go beyond this criterion by investigating regularized modal regression from a statistical learning viewpoint. A new regularized modal regression model is proposed for estimation and variable selection that is robust to outliers, heavy-tailed noise, and skewed noise...
December 2017: Advances in Neural Information Processing Systems
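To make the modal criterion concrete, here is a minimal sketch of kernel-based modal regression with an L1 penalty; the Gaussian kernel, subgradient ascent, and all names are illustrative assumptions, not the paper's implementation. The idea is to maximize the kernel-smoothed density of the residuals at zero, so large outlier residuals receive exponentially small weight:

```python
import numpy as np

def modal_regression(X, y, lam=0.1, h=1.0, lr=0.01, iters=500):
    """Hypothetical sketch: maximize the mode-seeking objective
    (1/n) * sum_i K_h(y_i - x_i @ beta) - lam * ||beta||_1
    with a Gaussian kernel K_h, by subgradient ascent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        r = y - X @ beta                       # residuals
        k = np.exp(-r**2 / (2 * h**2))         # kernel weights: outliers -> ~0
        grad = (X.T @ (k * r)) / (n * h**2)    # gradient of the smoothed mode term
        beta += lr * (grad - lam * np.sign(beta))  # L1 subgradient for sparsity
    return beta

# Toy usage: heavy-tailed (Cauchy-like) noise that would break least squares
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + rng.standard_t(df=1, size=200)
print(np.round(modal_regression(X, y), 2))
```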
https://www.readbyqxmd.com/read/29398882/gaussian-quadrature-for-kernel-features
#2
Tri Dao, Christopher De Sa, Christopher Ré
Kernel methods have recently attracted resurgent interest, showing performance competitive with deep neural networks in tasks such as speech recognition. The random Fourier features map is a technique commonly used to scale up kernel machines, but employing the randomized feature map means that O(ε⁻²) samples are required to achieve an approximation error of at most ε. We investigate some alternative schemes for constructing feature maps that are deterministic, rather than random, by approximating the kernel in the frequency domain using Gaussian quadrature...
December 2017: Advances in Neural Information Processing Systems
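A minimal one-dimensional sketch of the idea, assuming the unit-bandwidth Gaussian kernel (whose spectral measure is N(0, 1)) and Gauss-Hermite quadrature; the paper's schemes handle multivariate kernels, but the 1-D case already shows how fixed quadrature nodes replace random Fourier frequencies:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gq_features(x, n_nodes=8):
    """Deterministic feature map for k(x, y) = exp(-(x - y)^2 / 2).
    Gauss-Hermite nodes/weights approximate the expectation over the
    N(0, 1) spectral measure, so the error is deterministic, not random."""
    t, a = hermgauss(n_nodes)        # nodes/weights for int exp(-t^2) f(t) dt
    w = np.sqrt(2.0) * t             # change of variables to N(0, 1) frequencies
    q = a / np.sqrt(np.pi)           # normalized quadrature weights (sum to 1)
    x = np.atleast_1d(x)[:, None]
    # weighted cos/sin features; inner products approximate the kernel
    return np.hstack([np.sqrt(q) * np.cos(x * w), np.sqrt(q) * np.sin(x * w)])

x, y = 0.3, -0.7
fx, fy = gq_features(np.array([x, y]))
print(fx @ fy, np.exp(-(x - y) ** 2 / 2))   # quadrature vs. exact kernel
```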
https://www.readbyqxmd.com/read/29391769/inferring-generative-model-structure-with-static-analysis
#3
Paroma Varma, Bryan He, Payal Bajaj, Imon Banerjee, Nishith Khandwala, Daniel L Rubin, Christopher Ré
Obtaining enough labeled data to robustly train complex discriminative models is a major bottleneck in the machine learning pipeline. A popular solution is combining multiple sources of weak supervision using generative models. The structure of these models affects training label quality, but is difficult to learn without any ground truth labels. We instead rely on these weak supervision sources having some structure by virtue of being encoded programmatically. We present Coral, a paradigm that infers generative model structure by statically analyzing the code for these heuristics, thus reducing the data required to learn structure significantly...
December 2017: Advances in Neural Information Processing Systems
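A toy illustration of the static-analysis idea, with hypothetical heuristic functions and primitive names; Coral's actual analysis is richer, but the core move of reading dependency structure out of the code rather than learning it from data looks like this:

```python
import ast, inspect, itertools

# Hypothetical heuristics over named primitives (area, perimeter).
def hf1(area, perimeter): return 1 if area > 100 else -1
def hf2(area):            return 1 if area > 80 else 0
def hf3(perimeter):       return -1 if perimeter < 20 else 0

def primitives_used(fn):
    """Static-analysis stand-in: read which primitives a heuristic
    actually touches from its source code."""
    tree = ast.parse(inspect.getsource(fn))
    params = {a.arg for a in tree.body[0].args.args}
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return params & names

for f, g in itertools.combinations([hf1, hf2, hf3], 2):
    shared = primitives_used(f) & primitives_used(g)
    if shared:   # shared primitives imply a dependency edge in the model
        print(f.__name__, "<->", g.__name__, "via", shared)
```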
https://www.readbyqxmd.com/read/29375240/learning-to-compose-domain-specific-transformations-for-data-augmentation
#4
Alexander J Ratner, Henry R Ehrenberg, Zeshan Hussain, Jared Dunnmon, Christopher Ré
Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach...
December 2017: Advances in Neural Information Processing Systems
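A sketch of the composition step with hypothetical text transformation functions; here a uniform random policy stands in for the learned generative sequence model, and all function names are illustrative:

```python
import random

# Hypothetical user-specified transformation functions (TFs) over strings.
def drop_word(s):
    w = s.split()
    if len(w) > 1:
        w.pop(random.randrange(len(w)))
    return " ".join(w)

def duplicate_word(s):
    w = s.split()
    i = random.randrange(len(w))
    w.insert(i, w[i])
    return " ".join(w)

def shuffle_pair(s):
    w = s.split()
    if len(w) > 1:
        i = random.randrange(len(w) - 1)
        w[i], w[i + 1] = w[i + 1], w[i]
    return " ".join(w)

TFS = [drop_word, duplicate_word, shuffle_pair]

def augment(x, seq_len=3, policy=None):
    """Apply a sequence of TFs. `policy` stands in for the learned
    generative sequence model; the default is uniform sampling."""
    policy = policy or (lambda: random.choice(TFS))
    for _ in range(seq_len):
        x = policy()(x)
    return x

random.seed(0)
print(augment("the quick brown fox jumps"))
```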
https://www.readbyqxmd.com/read/29225449/action-centered-contextual-bandits
#5
Kristjan Greenewald, Ambuj Tewari, Predrag Klasnja, Susan Murphy
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees...
December 2017: Advances in Neural Information Processing Systems
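For flavor, a minimal linear Thompson sampling sketch that models only the action effect as linear in the context, which is the action-centered idea; this is an illustrative simplification, not the authors' exact algorithm:

```python
import numpy as np

class LinearTS:
    """Sketch: Bayesian linear model of the *differential* reward of
    acting vs. not acting; act when the sampled effect is positive."""
    def __init__(self, dim, v=0.5):
        self.A = np.eye(dim)        # posterior precision
        self.b = np.zeros(dim)      # precision-weighted mean
        self.v = v                  # posterior scale

    def choose(self, x):
        mean = np.linalg.solve(self.A, self.b)
        cov = self.v**2 * np.linalg.inv(self.A)
        theta = np.random.multivariate_normal(mean, cov)
        return int(theta @ x > 0)

    def update(self, x, a, reward_diff):
        if a:   # learn from the observed effect of acting
            self.A += np.outer(x, x)
            self.b += reward_diff * x

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.5])
bandit = LinearTS(dim=2)
for _ in range(500):
    x = rng.normal(size=2)
    if bandit.choose(x):
        bandit.update(x, 1, theta_true @ x + rng.normal(0, 0.1))
print(np.round(np.linalg.solve(bandit.A, bandit.b), 2))
```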
https://www.readbyqxmd.com/read/29872252/data-programming-creating-large-training-sets-quickly
#6
Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, Christopher Ré
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming, in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data but that are noisy and may conflict...
December 2016: Advances in Neural Information Processing Systems
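A toy example of labeling functions for a hypothetical spam task, combined here by majority vote as a stand-in for the paper's generative model (which additionally learns the accuracies and correlations of the labeling functions):

```python
import re
import numpy as np

# Hypothetical labeling functions: each returns +1 (spam), -1 (ham),
# or 0 (abstain). They are noisy and may conflict.
def lf_keyword(x):  return 1 if re.search(r"free|winner", x, re.I) else 0
def lf_shouting(x): return 1 if x.isupper() else 0
def lf_long(x):     return -1 if len(x.split()) > 8 else 0

LFS = [lf_keyword, lf_shouting, lf_long]

def label_matrix(docs):
    return np.array([[lf(d) for lf in LFS] for d in docs])

def majority_vote(L):
    """Stand-in for the generative model: vote over non-abstaining LFs."""
    return np.sign(L.sum(axis=1))

docs = ["FREE WINNER CLICK NOW",
        "meeting notes attached for your review later today thanks"]
print(majority_vote(label_matrix(docs)))   # -> [ 1 -1]
```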
https://www.readbyqxmd.com/read/29657512/dual-decomposed-learning-with-factorwise-oracles-for-structural-svms-of-large-output-domain
#7
Ian E H Yen, Xiangru Huang, Kai Zhong, Ruohan Zhang, Pradeep Ravikumar, Inderjit S Dhillon
Many applications of machine learning involve structured outputs with large domains, where learning a structured predictor is prohibitive due to repetitive calls to an expensive inference oracle. In this work, we show that by decomposing training of a Structural Support Vector Machine (SVM) into a series of multiclass SVM problems connected through messages, one can replace an expensive structured oracle with Factorwise Maximization Oracles (FMOs) that admit efficient implementations with complexity sublinear in the size of the factor domain...
December 2016: Advances in Neural Information Processing Systems
https://www.readbyqxmd.com/read/28713210/variance-reduction-in-stochastic-gradient-langevin-dynamics
#8
Avinava Dubey, Sashank J Reddi, Barnabás Póczos, Alexander J Smola, Eric P Xing, Sinead A Williamson
Stochastic gradient-based Monte Carlo methods such as stochastic gradient Langevin dynamics are useful tools for posterior inference on large scale datasets in many machine learning applications. These methods scale to large datasets by using noisy gradients calculated using a mini-batch or subset of the dataset. However, the high variance inherent in these noisy gradients degrades performance and leads to slower mixing. In this paper, we present techniques for reducing variance in stochastic gradient Langevin dynamics, yielding novel stochastic Monte Carlo methods that improve performance by reducing the variance in the stochastic gradient...
December 2016: Advances in Neural Information Processing Systems
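A minimal SVRG-style variance-reduction sketch applied to stochastic gradient Langevin dynamics; the anchoring scheme mirrors the standard SVRG construction, and the step sizes, names, and toy model are illustrative assumptions:

```python
import numpy as np

def svrg_ld(grad_f, theta0, data, step=1e-3, epochs=10, batch=10):
    """Variance-reduced SGLD sketch: anchor a full-data gradient at a
    snapshot, correct each mini-batch gradient with the snapshot's
    mini-batch gradient, then add Gaussian injection noise."""
    theta = theta0.copy()
    n = len(data)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        snap = theta.copy()
        g_full = np.mean([grad_f(snap, z) for z in data], axis=0)   # anchor
        for _ in range(n // batch):
            idx = rng.choice(n, batch, replace=False)
            g = np.mean([grad_f(theta, data[i]) for i in idx], axis=0)
            g_snap = np.mean([grad_f(snap, data[i]) for i in idx], axis=0)
            g_vr = g - g_snap + g_full               # variance-reduced gradient
            noise = rng.normal(scale=np.sqrt(2 * step), size=theta.shape)
            theta = theta - step * n * g_vr + noise  # Langevin update
    return theta

# Toy usage: posterior over the mean of Gaussian data centered at 2.0
data = np.random.default_rng(1).normal(2.0, 1.0, size=(100, 1))
grad = lambda th, z: (th - z)   # per-datum negative log-likelihood gradient
print(svrg_ld(grad, np.zeros(1), data))
```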
https://www.readbyqxmd.com/read/29308004/hypothesis-testing-in-unsupervised-domain-adaptation-with-applications-in-alzheimer-s-disease
#9
Hao Henry Zhou, Sathya N Ravi, Vamsi K Ithapu, Sterling C Johnson, Grace Wahba, Vikas Singh
Consider samples from two different data sources, x_s ~ P_source and x_t ~ P_target. We only observe their transformed versions h(x_s) and g(x_t), for some known function classes h(·) and g(·). Our goal is to perform a statistical test checking if P_source = P_target while removing the distortions induced by the transformations. This problem is closely related to domain adaptation, and in our case, is motivated by the need to combine clinical and imaging based biomarkers from multiple sites and/or batches, a fairly common impediment in conducting analyses with much larger sample sizes...
2016: Advances in Neural Information Processing Systems
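A generic kernel two-sample (MMD) permutation test of the kind such hypothesis testing builds on; this sketch omits the paper's key ingredient, correcting for the transformations h and g, and the kernel bandwidth and permutation count are illustrative:

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def permutation_test(X, Y, n_perm=200, seed=0):
    """p-value for H0: P_source = P_target, by permuting pooled samples."""
    rng = np.random.default_rng(seed)
    stat = mmd2(X, Y)
    Z, n = np.vstack([X, Y]), len(X)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        null.append(mmd2(Z[idx[:n]], Z[idx[n:]]))
    return np.mean(np.array(null) >= stat)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1, size=(100, 2))
Y = rng.normal(0.5, 1, size=(100, 2))
print(permutation_test(X, Y))   # small p-value: distributions differ
```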
https://www.readbyqxmd.com/read/29081639/a-sparse-interactive-model-for-matrix-completion-with-side-information
#10
Jin Lu, Guannan Liang, Jiangwen Sun, Jinbo Bi
Matrix completion methods can benefit from side information besides the partially observed matrix. The use of side features that describe the row and column entities of a matrix has been shown to reduce the sample complexity for completing the matrix. We propose a novel sparse formulation that explicitly models the interaction between the row and column side features to approximate the matrix entries. Unlike earlier methods, this model does not require a low-rank condition on the model parameter matrix. We prove that when the side features span the latent feature space of the matrix to be recovered, the number of observed entries needed for an exact recovery is O(log N), where N is the size of the matrix...
2016: Advances in Neural Information Processing Systems
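A minimal sketch of the interactive model: approximate each observed entry M[i, j] by a_i^T S b_j with an L1-penalized interaction matrix S, fitted here by ISTA-style proximal gradient; the solver, step sizes, and names are illustrative, not the paper's algorithm:

```python
import numpy as np

def sparse_interactive_complete(A, B, M, omega, lam=0.1, lr=0.01, iters=1000):
    """Fit a sparse interaction matrix S so that M[i, j] ~ A[i] @ S @ B[j]
    on the observed index set `omega` (squared loss + soft-thresholding)."""
    S = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(iters):
        G = np.zeros_like(S)
        for i, j in omega:                        # gradient over observed entries
            r = A[i] @ S @ B[j] - M[i, j]
            G += r * np.outer(A[i], B[j])
        S -= lr * G / len(omega)
        S = np.sign(S) * np.maximum(np.abs(S) - lr * lam, 0.0)  # L1 prox
    return S

# Toy usage: recover a sparse interaction from ~30% of the entries
rng = np.random.default_rng(0)
A, B = rng.normal(size=(30, 4)), rng.normal(size=(30, 4))
S_true = np.zeros((4, 4)); S_true[0, 1] = 2.0; S_true[2, 3] = -1.5
M = A @ S_true @ B.T
omega = [(i, j) for i in range(30) for j in range(30) if rng.random() < 0.3]
print(np.round(sparse_interactive_complete(A, B, M, omega), 2))
```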
https://www.readbyqxmd.com/read/28867924/estimating-the-size-of-a-large-network-and-its-communities-from-a-random-sample
#11
Lin Chen, Amin Karbasi, Forrest W Crawford
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks...
2016: Advances in Neural Information Processing Systems
https://www.readbyqxmd.com/read/28729779/deep-learning-models-of-the-retinal-response-to-natural-scenes
#12
Lane T McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, Stephen A Baccus
A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs)...
2016: Advances in Neural Information Processing Systems
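A miniature stimulus-to-firing-rate CNN in PyTorch with illustrative layer sizes (not the paper's architecture); the softplus output keeps predicted rates non-negative:

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Hypothetical sketch: a short movie clip (stacked stimulus frames)
    goes in; non-negative firing rates for a few cells come out."""
    def __init__(self, n_cells=5, history=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(history, 8, kernel_size=15), nn.Softplus(),
            nn.Conv2d(8, 8, kernel_size=9), nn.Softplus(),
            nn.Flatten(),
            nn.LazyLinear(n_cells), nn.Softplus(),  # rates must be >= 0
        )

    def forward(self, movie):   # movie: (batch, history, H, W)
        return self.net(movie)

model = RetinaCNN()
rates = model(torch.randn(2, 40, 50, 50))
print(rates.shape)   # torch.Size([2, 5])
```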
https://www.readbyqxmd.com/read/28344429/scan-order-in-gibbs-sampling-models-in-which-it-matters-and-bounds-on-how-much
#13
Bryan He, Christopher De Sa, Ioannis Mitliagkas, Christopher Ré
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions...
2016: Advances in Neural Information Processing Systems
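A toy comparison of the two scan orders on a bivariate Gaussian; both samplers target the same stationary distribution (the paper's question is how fast they mix), and the model and step counts here are illustrative:

```python
import numpy as np

def gibbs(n_steps, scan="systematic", rho=0.9, seed=0):
    """Gibbs sampler for a standard bivariate Gaussian with correlation rho.
    Systematic scan updates both coordinates each sweep; random scan
    updates one uniformly chosen coordinate per step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    out = []
    sd = np.sqrt(1 - rho**2)    # conditional std dev: x_i | x_j ~ N(rho*x_j, 1-rho^2)
    for _ in range(n_steps):
        order = [0, 1] if scan == "systematic" else [rng.integers(2)]
        for i in order:
            x[i] = rng.normal(rho * x[1 - i], sd)
        out.append(x.copy())
    return np.array(out)

for scan in ("systematic", "random"):
    s = gibbs(20000, scan)
    print(scan, np.round(np.corrcoef(s.T)[0, 1], 3))   # both approach 0.9
```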
https://www.readbyqxmd.com/read/27563230/beyond-sub-gaussian-measurements-high-dimensional-structured-estimation-with-sub-exponential-designs
#14
Vidyashankar Sivakumar, Arindam Banerjee, Pradeep Ravikumar
We consider the problem of high-dimensional structured estimation with norm-regularized estimators, such as Lasso, when the design matrix and noise are drawn from sub-exponential distributions. Existing results only consider sub-Gaussian designs and noise, and both the sample complexity and non-asymptotic estimation error have been shown to depend on the Gaussian width of suitable sets. In contrast, for the sub-exponential setting, we show that the sample complexity and the estimation error will depend on the exponential width of the corresponding sets, and the analysis holds for any norm...
December 2015: Advances in Neural Information Processing Systems
https://www.readbyqxmd.com/read/27330264/taming-the-wild-a-unified-analysis-of-hogwild-style-algorithms
#15
Christopher De Sa, Ce Zhang, Kunle Olukotun, Christopher Ré
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic...
December 2015: Advances in Neural Information Processing Systems
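A minimal Hogwild!-style sketch: worker threads run SGD on a shared parameter vector with no locks. Note that CPython's GIL serializes bytecode execution, so this only illustrates the unsynchronized access pattern, not a true lock-free speedup; names and constants are illustrative:

```python
import numpy as np
import threading

def hogwild_sgd(X, y, n_threads=4, lr=0.01, steps=5000):
    w = np.zeros(X.shape[1])            # shared parameters, updated racily

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(steps):
            i = rng.integers(len(y))
            g = (X[i] @ w - y[i]) * X[i]   # single-example least-squares grad
            w[:] -= lr * g                  # unsynchronized in-place update

    ts = [threading.Thread(target=worker, args=(s,)) for s in range(n_threads)]
    for t in ts: t.start()
    for t in ts: t.join()
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5])
print(np.round(hogwild_sgd(X, y), 2))   # approaches [1, -2, 0.5]
```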
https://www.readbyqxmd.com/read/27279724/rapidly-mixing-gibbs-sampling-for-a-class-of-factor-graphs-using-hierarchy-width
#16
Christopher De Sa, Ce Zhang, Kunle Olukotun, Christopher Ré
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them...
December 2015: Advances in Neural Information Processing Systems
https://www.readbyqxmd.com/read/27182188/rate-agnostic-causal-structure-learning
#17
Sergey Plis, David Danks, Cynthia Freeman, Vince Calhoun
Causal structure learning from time series data is a major scientific challenge. Extant algorithms assume that measurements occur sufficiently quickly; more precisely, they assume approximately equal system and measurement timescales. In many domains, however, measurements occur at a significantly slower rate than the underlying system changes, but the size of the timescale mismatch is often unknown. This paper develops three causal structure learning algorithms, each of which discovers all dynamic causal graphs that explain the observed measurement data, perhaps given undersampling...
December 2015: Advances in Neural Information Processing Systems
https://www.readbyqxmd.com/read/27087766/semi-supervised-convolutional-neural-networks-for-text-categorization-via-region-embedding
#18
Rie Johnson, Tong Zhang
This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks...
December 2015: Advances in Neural Information Processing Systems
https://www.readbyqxmd.com/read/28615917/high-dimensional-em-algorithm-statistical-optimization-and-asymptotic-normality
#19
Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu
We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models. In particular, we make two contributions: (i) For parameter estimation, we propose a novel high dimensional EM algorithm which naturally incorporates sparsity structure into parameter estimation. With an appropriate initialization, this algorithm converges at a geometric rate and attains an estimator with the (near-)optimal statistical rate of convergence. (ii) Based on the obtained estimator, we propose new inferential procedures for testing hypotheses and constructing confidence intervals for low dimensional components of high dimensional parameters...
2015: Advances in Neural Information Processing Systems
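A sketch of the sparsity-incorporating step for a symmetric two-component Gaussian mixture: run EM, but hard-threshold the estimate to its s largest coordinates after each M-step. The model, initialization, and names are illustrative, and the mixture's sign is unidentifiable, so the estimate may come out negated:

```python
import numpy as np

def sparse_em_gmm(Y, s, iters=50):
    """Truncated EM for Y ~ 0.5 N(beta, I) + 0.5 N(-beta, I):
    E-step computes E[z | y] = tanh(y @ beta); M-step takes the
    weighted mean; truncation keeps the s largest coordinates."""
    beta = Y.mean(axis=0) + 0.1                 # crude symmetry-breaking init
    for _ in range(iters):
        w = np.tanh(Y @ beta)                   # E-step (unit noise)
        beta = (w[:, None] * Y).mean(axis=0)    # M-step
        keep = np.argsort(np.abs(beta))[-s:]    # hard-thresholding step
        mask = np.zeros_like(beta); mask[keep] = 1
        beta *= mask
    return beta

rng = np.random.default_rng(0)
p, n = 50, 300
beta_true = np.zeros(p); beta_true[:3] = [1.5, -1.0, 2.0]
Z = rng.choice([-1, 1], size=n)
Y = Z[:, None] * beta_true + rng.normal(size=(n, p))
print(np.round(sparse_em_gmm(Y, s=3)[:5], 2))   # recovers +/- beta_true
```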
https://www.readbyqxmd.com/read/28408793/optimal-linear-estimation-under-unknown-nonlinear-transform
#20
Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu
Linear regression studies the problem of estimating a model parameter β* ∈ ℝ^p from n observations {(x_i, y_i)}_{i=1}^n generated by the linear model y_i = ⟨x_i, β*⟩ + ε_i. We consider a significant generalization in which the relationship between ⟨x_i, β*⟩ and y_i is noisy, quantized to a single bit, potentially nonlinear, noninvertible, as well as unknown. This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing...
2015: Advances in Neural Information Processing Systems
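A sketch of why Gaussian designs make this tractable: by a Stein-type identity, E[y·x] is proportional to β*, so a simple first-moment average recovers the direction of β* even when the link is unknown and quantized to one bit. The estimator below is a generic illustration, not the paper's refined procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 10
beta = np.zeros(p); beta[:2] = [3.0, 4.0]; beta /= np.linalg.norm(beta)
X = rng.normal(size=(n, p))
y = np.sign(X @ beta + 0.5 * rng.normal(size=n))  # one-bit, noisy, nonlinear

est = (y[:, None] * X).mean(axis=0)   # (1/n) sum_i y_i x_i, prop. to beta*
est /= np.linalg.norm(est)
print(np.round(est @ beta, 3))        # cosine similarity with true direction
```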
