
2012


Bayesian estimation of free energies from equilibrium simulations

Habeck, M.

Physical Review Letters, 109(10):5, September 2012 (article)

Abstract
Free energy calculations are an important tool in statistical physics and biomolecular simulation. This Letter outlines a Bayesian method to estimate free energies from equilibrium Monte Carlo simulations. A Gibbs sampler is developed that allows efficient sampling of free energies and the density of states. The Gibbs sampling output can be used to estimate expected free energy differences and their uncertainties. The probabilistic formulation offers a unifying framework for existing methods such as the weighted histogram analysis method and the multistate Bennett acceptance ratio; both are shown to be approximate versions of the full probabilistic treatment.
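
The abstract names WHAM and the multistate Bennett acceptance ratio as approximate versions of the full probabilistic treatment. As a point of reference, here is a minimal numpy sketch of the standard WHAM self-consistency iteration for reduced free energies and the density of states, assuming histogrammed energies from several canonical ensembles. This is the point estimate the Letter generalizes, not the paper's Gibbs sampler, and the names and binning are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def wham_free_energies(E_bins, hist, betas, n_samples, n_iter=500):
    """WHAM fixed point: the point-estimate approximation the Letter generalizes.

    E_bins    : (B,) energy-bin centres
    hist      : (B,) total counts pooled over all K simulations
    betas     : (K,) inverse temperatures of the simulated ensembles
    n_samples : (K,) number of samples drawn from each ensemble
    Returns reduced free energies f_k (gauge f_0 = 0) and the log density of states.
    """
    f = np.zeros(len(betas))
    for _ in range(n_iter):
        # density of states: g(E) = hist(E) / sum_k n_k exp(f_k - beta_k E)
        log_denom = logsumexp(np.log(n_samples)[:, None] + f[:, None]
                              - betas[:, None] * E_bins[None, :], axis=0)
        log_g = np.log(hist) - log_denom
        # reduced free energies: f_k = -log sum_E g(E) exp(-beta_k E)
        f = -logsumexp(log_g[None, :] - betas[:, None] * E_bins[None, :], axis=1)
        f -= f[0]  # free energies are defined only up to an additive constant
    return f, log_g
```

The paper's Gibbs sampler instead samples the free energies and the density of states from their joint posterior, which is what yields uncertainty estimates on free energy differences.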

Web DOI Project Page [BibTex]



Entropy Search for Information-Efficient Global Optimization

Hennig, P., Schuler, C.

Journal of Machine Learning Research, 13, pages: 1809-1837, June 2012 (article)

Abstract
Contemporary global optimization algorithms are based on local measures of utility, rather than a probability measure over location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizers is that the corresponding inference problem is intractable in several ways. This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm which addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.
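
The central object here, a probability measure over the location of the optimum, can be approximated by brute force for intuition: draw joint samples from a GP posterior on a candidate grid and count where the minimum lands. The sketch below does exactly that, assuming a squared-exponential kernel and a one-dimensional input; the paper's contribution is a far cheaper analytic approximation of this measure and of the information gain, so everything below is illustrative only.

```python
import numpy as np

def se_kernel(a, b, ell=0.2, sf=1.0):
    return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def pmin_and_entropy(x_obs, y_obs, x_cand, n_draws=5000, noise=1e-4, seed=0):
    """Monte Carlo estimate of p_min, the belief over the minimizer's location."""
    rng = np.random.default_rng(seed)
    K = se_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = se_kernel(x_obs, x_cand)
    mu = Ks.T @ np.linalg.solve(K, y_obs)                    # posterior mean
    cov = se_kernel(x_cand, x_cand) - Ks.T @ np.linalg.solve(K, Ks)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(x_cand)))
    draws = mu[:, None] + L @ rng.standard_normal((len(x_cand), n_draws))
    p_min = np.bincount(draws.argmin(axis=0), minlength=len(x_cand)) / n_draws
    nz = p_min > 0
    return p_min, -np.sum(p_min[nz] * np.log(p_min[nz]))     # belief and its entropy
```

An information-efficient policy in the paper's sense scores each candidate evaluation by the expected reduction of this entropy, rather than by the function value the evaluation is expected to return.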

PDF Web Project Page Project Page Project Page [BibTex]



glm-ie: The Generalised Linear Models Inference and Estimation Toolbox

Nickisch, H.

Journal of Machine Learning Research, 13, pages: 1699-1703, May 2012 (article)

Abstract
The glm-ie toolbox contains scalable estimation routines for GLMs (generalised linear models) and SLMs (sparse linear models) as well as an implementation of a scalable convex variational Bayesian inference relaxation. We designed the glm-ie package to be simple, generic and easily expansible. Most of the code is written in Matlab, including some MEX files; the code is fully compatible with both Matlab 7.x and GNU Octave 3.3.x. Probabilistic classification, sparse linear modelling and logistic regression are covered in a common algorithmic framework.
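
glm-ie itself is Matlab/Octave code, so the snippet below is only a language-neutral illustration of the kind of penalised point estimation an SLM involves: iterative soft thresholding (ISTA) for the lasso objective. It is not the toolbox's API, and the solver choice is an assumption made for brevity.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Minimise 0.5*||y - X u||^2 + lam*||u||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the gradient
    u = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = u - step * (X.T @ (X @ u - y))     # gradient step on the quadratic term
        u = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox of the L1 term
    return u
```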

PDF PDF Project Page [BibTex]



Evaluation of marginal likelihoods via the density of states

Habeck, M.

In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2012), JMLR W&CP 22, pages: 486-494, (Editors: N Lawrence and M Girolami), 2012 (inproceedings)

Abstract
Bayesian model comparison involves the evaluation of the marginal likelihood, the expectation of the likelihood under the prior distribution. Typically, this high-dimensional integral over all model parameters is approximated using Markov chain Monte Carlo methods. Thermodynamic integration is a popular method to estimate the marginal likelihood by using samples from annealed posteriors. Here we show that there exists a robust and flexible alternative. The new method estimates the density of states, which counts the number of states associated with a particular value of the likelihood. If the density of states is known, computation of the marginal likelihood reduces to a one-dimensional integral. We outline a maximum likelihood procedure to estimate the density of states from annealed posterior samples. We apply our method to various likelihoods and show that it is superior to thermodynamic integration in that it is more flexible with regard to the annealing schedule and the family of bridging distributions. Finally, we discuss the relation of our method to Skilling's nested sampling.
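
The reduction claimed in the abstract is easy to state in code: once the density of states over likelihood values is in hand, the evidence is a one-dimensional sum. A minimal sketch, assuming the density of states has already been estimated as a prior mass per log-likelihood bin (the paper's maximum likelihood estimator for that density is not reproduced here):

```python
from scipy.special import logsumexp

def log_evidence(log_like_bins, log_g):
    """Z = sum_i g_i * L_i, evaluated in log space.

    log_like_bins : (B,) log-likelihood value of each bin
    log_g         : (B,) log prior mass per bin, with logsumexp(log_g) == 0
    """
    return logsumexp(log_g + log_like_bins)
```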

PDF Project Page [BibTex]



Kernel Topic Models

Hennig, P., Stern, D., Herbrich, R., Graepel, T.

In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2012), JMLR W&CP 22, pages: 511-519, (Editors: N D Lawrence and M Girolami), 2012 (inproceedings)

Abstract
Latent Dirichlet Allocation models discrete data as a mixture of discrete distributions, using Dirichlet beliefs over the mixture weights. We study a variation of this concept, in which the documents' mixture weight beliefs are replaced with squashed Gaussian distributions. This allows documents to be associated with elements of a Hilbert space, admitting kernel topic models (KTM), modelling temporal, spatial, hierarchical, social and other structure between documents. The main challenge is efficient approximate inference on the latent Gaussian. We present an approximate algorithm cast around a Laplace approximation in a transformed basis. The KTM can also be interpreted as a type of Gaussian process latent variable model, or as a topic model conditional on document features, uncovering links between earlier work in these areas.
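
As a rough intuition for the "squashed Gaussian" construction, the sketch below draws correlated Gaussian topic weights over document features and squashes them onto the simplex. Both the softmax squashing and the feature kernel are assumptions made for illustration; they are not the paper's exact transform or its approximate inference.

```python
import numpy as np

def rbf(X, ell=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sample_topic_proportions(X_docs, n_topics, seed=0):
    """One GP draw per topic over document features, squashed to mixture weights."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(rbf(X_docs) + 1e-8 * np.eye(len(X_docs)))
    eta = L @ rng.standard_normal((len(X_docs), n_topics))   # correlated across docs
    eta -= eta.max(axis=1, keepdims=True)                    # numerical stability
    return np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
```

Documents that are close in feature space (in time, space, or a social graph) then receive similar topic proportions, which is the structure the abstract advertises.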

PDF Web Project Page [BibTex]


2011


Additive Gaussian Processes

Duvenaud, D., Nickisch, H., Rasmussen, C.

In Advances in Neural Information Processing Systems 24, pages: 226-234, (Editors: J Shawe-Taylor, RS Zemel, P Bartlett, F Pereira, and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
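
The "efficient evaluation of all input interaction terms" can be made concrete. With one base kernel per input dimension, the sum over all r-way interaction terms is an elementary symmetric polynomial of the per-dimension kernel values, which the Newton-Girard identities compute in O(D·R) instead of enumerating exponentially many terms. A sketch under the assumption of squared-exponential base kernels (names illustrative):

```python
import numpy as np

def additive_kernel(X1, X2, ell, order_vars):
    """Additive GP kernel: weighted sum of all interaction orders 1..R.

    X1, X2     : (n, D) and (m, D) inputs
    ell        : (D,) lengthscales of the one-dimensional base kernels
    order_vars : (R,) variance assigned to each interaction order
    """
    D, R = X1.shape[1], len(order_vars)
    # z[d] = one-dimensional SE kernel on input dimension d, shape (n, m)
    z = np.stack([np.exp(-0.5 * (X1[:, None, d] - X2[None, :, d]) ** 2 / ell[d] ** 2)
                  for d in range(D)])
    p = np.stack([(z ** k).sum(axis=0) for k in range(1, R + 1)])  # power sums
    e = [np.ones(z.shape[1:])]                 # elementary symmetric polynomials
    for n in range(1, R + 1):                  # Newton-Girard recursion
        e.append(sum((-1) ** (k - 1) * e[n - k] * p[k - 1]
                     for k in range(1, n + 1)) / n)
    return sum(order_vars[r - 1] * e[r] for r in range(1, R + 1))
```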

PDF Web Project Page [BibTex]



Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference

Seeger, M., Nickisch, H.

In JMLR Workshop and Conference Proceedings Volume 15: AISTATS 2011, pages: 652-660, (Editors: G Gordon, D Dunson, and M Dudík), MIT Press, Cambridge, MA, USA, 14th International Conference on Artificial Intelligence and Statistics, April 2011 (inproceedings)

Abstract
We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuous-variable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from (Opper & Winther, 2005) with covariance decoupling techniques (Wipf & Nagarajan, 2008; Nickisch & Seeger, 2009), it runs at least an order of magnitude faster than the most common EP solver.
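
For readers unfamiliar with the relaxation being made convergent here, a toy version of plain sequential EP may help: Gaussian sites refined by moment matching against probit factors on a single scalar parameter. This standard scheme carries no convergence guarantee; the paper's double-loop algorithm with covariance decoupling is what replaces it for large continuous-variable models. The model below is an illustrative assumption.

```python
import numpy as np
from scipy.stats import norm

def ep_probit_scalar(y, prior_var=1.0, n_sweeps=30):
    """Sequential EP for p(theta) prop. to N(theta; 0, prior_var) * prod_i Phi(y_i * theta)."""
    n = len(y)
    tau_s, nu_s = np.zeros(n), np.zeros(n)      # site precisions and precision-means
    tau, nu = 1.0 / prior_var, 0.0              # global natural parameters
    for _ in range(n_sweeps):
        for i in range(n):
            tau_c, nu_c = tau - tau_s[i], nu - nu_s[i]          # cavity
            m_c, v_c = nu_c / tau_c, 1.0 / tau_c
            z = y[i] * m_c / np.sqrt(1.0 + v_c)
            r = norm.pdf(z) / norm.cdf(z)
            m_t = m_c + y[i] * v_c * r / np.sqrt(1.0 + v_c)     # tilted mean
            v_t = v_c - v_c**2 * r * (z + r) / (1.0 + v_c)      # tilted variance
            tau_s[i] = 1.0 / v_t - tau_c                        # moment-matched site
            nu_s[i] = m_t / v_t - nu_c
            tau, nu = 1.0 / prior_var + tau_s.sum(), nu_s.sum()
    return nu / tau, 1.0 / tau                  # posterior mean and variance
```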

PDF Web Project Page [BibTex]


2010


Gaussian Processes for Machine Learning (GPML) Toolbox

Rasmussen, C., Nickisch, H.

Journal of Machine Learning Research, 11, pages: 3011-3015, November 2010 (article)

Abstract
The GPML toolbox provides a wide range of functionality for Gaussian process (GP) inference and prediction. GPs are specified by mean and covariance functions; we offer a library of simple mean and covariance functions and mechanisms to compose more complex ones. Several likelihood functions are supported, including Gaussian and heavy-tailed for regression as well as others suitable for classification. Finally, a range of inference methods is provided, including exact and variational inference, Expectation Propagation, and Laplace's method for dealing with non-Gaussian likelihoods, as well as FITC for dealing with large regression tasks.
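
GPML is Matlab/Octave; the numpy sketch below mirrors the exact-inference regression path the toolbox implements (zero mean, squared-exponential covariance, Gaussian likelihood), i.e. the standard Cholesky-based predictive equations, and is not the toolbox's actual interface.

```python
import numpy as np

def gp_regress(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """Exact GP regression: latent predictive mean and variance at test inputs Xs."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    L = np.linalg.cholesky(k(X, X) + sn**2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, sf**2 - (v**2).sum(axis=0)   # mean, variance
```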

Web Project Page [BibTex]


2008


Approximations for Binary Gaussian Process Classification

Nickisch, H., Rasmussen, C.

Journal of Machine Learning Research, 9, pages: 2035-2078, October 2008 (article)

Abstract
We provide a comprehensive overview of many recent algorithms for approximate inference in Gaussian process models for probabilistic binary classification. The relationships between several approaches are elucidated theoretically, and the properties of the different algorithms are corroborated by experimental results. We examine both 1) the quality of the predictive distributions and 2) the suitability of the different marginal likelihood approximations for model selection (selecting hyperparameters) and compare to a gold standard based on MCMC. Interestingly, some methods produce good predictive distributions although their marginal likelihood approximations are poor. Strong conclusions are drawn about the methods: The Expectation Propagation algorithm is almost always the method of choice unless the computational budget is very tight. We also extend existing methods in various ways, and provide unifying code implementing all approaches.
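
Of the approximations compared in the paper, Laplace's method is compact enough to sketch: Newton iterations for the posterior mode of the latent function under a logistic likelihood, in the numerically stable form familiar from Rasmussen and Williams' book. The kernel matrix and likelihood choice here are assumptions for illustration.

```python
import numpy as np

def laplace_gpc(K, y, n_iter=20):
    """Laplace approximation for binary GP classification, labels y in {-1, +1}.

    Newton iterations for the mode f_hat of p(f | y) with a logistic likelihood;
    returns the mode and the vector a with f_hat = K a, used for prediction.
    """
    n = len(y)
    f = np.zeros(n)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-f))          # Bernoulli probabilities
        grad = (y + 1) / 2.0 - pi              # gradient of log p(y | f)
        W = pi * (1.0 - pi)                    # negative Hessian (diagonal)
        sW = np.sqrt(W)
        B = np.eye(n) + sW[:, None] * K * sW[None, :]
        L = np.linalg.cholesky(B)
        b = W * f + grad
        a = b - sW * np.linalg.solve(L.T, np.linalg.solve(L, sW * (K @ b)))
        f = K @ a
    return f, a
```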

PDF PDF Project Page [BibTex]
