Probabilistic Inference Group (PIGS) - Archive

Meetings in 2008

Tue 25 November 2008 (Chris Williams)

The discovery of structural form. Kemp, C. and Tenenbaum, J. B. (2008). Proceedings of the National Academy of Sciences. 105(31), 10687-10692.

See Josh's website for supporting information and commentary!

Tue 11 November 2008 (Jakub Piatkowski)

Tue 28 October 2008

Brendan J. Frey and Delbert Dueck: Clustering by Passing Messages Between Data Points. Science, Vol 315, No 5814, pp 972-976, February 2007.

See also accompanying "Perspective" article. Both papers are available here: http://www.psi.toronto.edu/pubs2/publications.php
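The algorithm alternates two kinds of messages between data points: responsibilities r(i,k), which measure how well-suited point k is to serve as the exemplar for point i, and availabilities a(i,k), which measure how appropriate it would be for i to choose k. A minimal numpy sketch of these updates (the damping factor and iteration count are illustrative choices, not the paper's settings):

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Minimal affinity propagation on an n x n similarity matrix S.

    S[i, k] is the similarity of point i to candidate exemplar k; the
    diagonal S[k, k] holds the 'preference' of k to be an exemplar.
    Returns, for each point, the index of its chosen exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    rows = np.arange(n)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        # a(k,k) = sum_{i' != k} max(0, r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())     # keep r(k,k) in the column sums
        A_new = Rp.sum(axis=0)[None, :] - Rp   # drop the i' = i term
        diag = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        A_new[rows, rows] = diag
        A = damping * A + (1 - damping) * A_new
    return np.argmax(A + R, axis=1)
```

In the paper's examples the similarity is the negative squared Euclidean distance, with the diagonal preferences set to the median similarity, so that the number of clusters emerges from the data rather than being fixed in advance.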

Tue 21 October 2008 (Nicolas Heess)

Tue 23 September 2008 (Nicolas Heess)

Tue 09 September 2008 (Edwin Bonilla)

Tue 26 August 2008 (Kian Ming Chai)

http://www.jmlr.org/papers/volume8/pillai07a/pillai07a.pdf

Characterizing the Function Space for Bayesian Kernel Models

Natesh S. Pillai, Qiang Wu, Feng Liang, Sayan Mukherjee, Robert L. Wolpert; JMLR 8(Aug):1769--1797, 2007.

Kernel methods have been very popular in the machine learning literature in the last ten years, mainly in the context of Tikhonov regularization algorithms. In this paper we study a coherent Bayesian kernel model based on an integral operator defined as the convolution of a kernel with a signed measure. Priors on the random signed measures correspond to prior distributions on the functions mapped by the integral operator. We study several classes of signed measures and their image mapped by the integral operator. In particular, we identify a general class of measures whose image is dense in the reproducing kernel Hilbert space (RKHS) induced by the kernel. A consequence of this result is a function theoretic foundation for using non-parametric prior specifications in Bayesian modeling, such as Gaussian process and Dirichlet process prior distributions. We discuss the construction of priors on spaces of signed measures using Gaussian and Lévy processes, with the Dirichlet processes being a special case of the latter. Computational issues involved with sampling from the posterior distribution are outlined for a univariate regression and a high dimensional classification problem.
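The central object in the abstract is the integral operator mapping a random signed measure Z to a function f(x) = ∫ k(x, u) Z(du), so that a prior on Z induces a prior on f. A rough illustration of a draw from such a prior, approximating Z by a finite atomic measure with Gaussian weights (the RBF kernel, uniform atom placement, and 1/√J scaling are assumptions made for this sketch, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, u, lengthscale=0.2):
    """RBF kernel matrix k(x_i, u_j) for 1-d inputs."""
    return np.exp(-0.5 * (x[:, None] - u[None, :]) ** 2 / lengthscale ** 2)

J = 200
u = rng.uniform(0.0, 1.0, size=J)              # atom locations of the measure Z
w = rng.normal(0.0, 1.0, size=J) / np.sqrt(J)  # Gaussian signed weights
x = np.linspace(0.0, 1.0, 500)
f = rbf(x, u) @ w  # f(x) ~= sum_j w_j k(x, u_j), a draw from the induced prior
```

Replacing the Gaussian weights with draws from other processes (e.g. a Dirichlet process) changes the induced prior on f; characterising this family of constructions is what the paper does.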

Tue 15 July 2008 (Amos Storkey)

UAI review (http://uai2008.cs.helsinki.fi/programme.shtml):

Tal El-Hay, Nir Friedman, Raz Kupferman: Gibbs Sampling in Factorized Continuous-Time Markov Processes

Gustavo Lacerda, Peter Spirtes, Joseph Ramsey, Patrik Hoyer: Discovering Cyclic Causal Models by Independent Components Analysis

Chong Wang, David Blei, David Heckerman: Continuous Time Dynamic Topic Models

Tue 15 July 2008 (Chris Williams)

Download .zip file of all papers from http://icml2008.cs.helsinki.fi/papers/final-pdfs.zip, or papers below 2-up from https://wiki.inf.ed.ac.uk/pub/ANC/PIGS/icml-2up.zip

Paper #502: Data Spectroscopy: Learning Mixture Models using Eigenspaces of Convolution Operators. Tao Shi, Mikhail Belkin, and Bin Yu.

Paper #573: On the Quantitative Analysis of Deep Belief Networks. Ruslan Salakhutdinov and Iain Murray.

Paper #638: Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient. Tijmen Tieleman. (A minimal sketch of the persistent-chain idea follows this list.)

Paper #413: Modeling Interleaved Hidden Processes. Niels Landwehr.

Paper #266: SVM Optimization: Inverse Dependence on Training Set Size. Shai Shalev-Shwartz and Nathan Srebro.

Paper #588: An Asymptotic Analysis of Generative, Discriminative, and Pseudolikelihood Estimators. Percy Liang and Michael Jordan.

Paper #520: Multi-Task Learning for HIV Therapy Screening. Steffen Bickel, Jasmina Bogojeska, Thomas Lengauer, and Tobias Scheffer.

Paper #476: Improved Nystrom Low-Rank Approximation and Error Analysis. Kai Zhang, Ivor Tsang, and James Kwok.

Paper #241: Gaussian Process Product Models for Nonparametric Nonstationarity. Ryan Adams and Oliver Stegle.

Paper #419: Memory Bounded Inference in Topic Models. Ryan Gomes, Max Welling, and Pietro Perona.

Paper #182: Inverting the Viterbi Algorithm: an Abstract Framework for Structure Design. Michael Schnall-Levin, Leonid Chindelevitch, and Bonnie Berger.
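On paper #638 above: the idea of persistent contrastive divergence (PCD) is to keep a set of "fantasy" Gibbs chains alive across parameter updates, using one Gibbs step per update in place of the intractable model expectation in the likelihood gradient. A rough sketch for a binary RBM (the architecture, learning rate, and initialisation are illustrative, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def pcd_train(data, n_hidden=64, lr=0.01, epochs=10, batch=100):
    """Train a binary RBM with persistent contrastive divergence."""
    n, d = data.shape
    W = 0.01 * rng.normal(size=(d, n_hidden))  # visible-hidden weights
    b, c = np.zeros(d), np.zeros(n_hidden)     # visible and hidden biases
    fantasy = (rng.random((batch, d)) < 0.5).astype(float)  # persistent chains
    for _ in range(epochs):
        for i in range(0, n - batch + 1, batch):
            v = data[i:i + batch]
            ph_data = sigmoid(v @ W + c)       # positive phase: E[h | data]
            # negative phase: one Gibbs sweep of the persistent chains
            h = (rng.random((batch, n_hidden)) < sigmoid(fantasy @ W + c)).astype(float)
            fantasy = (rng.random((batch, d)) < sigmoid(h @ W.T + b)).astype(float)
            ph_model = sigmoid(fantasy @ W + c)
            # approximate likelihood gradient: data term minus model term
            W += lr * (v.T @ ph_data - fantasy.T @ ph_model) / batch
            b += lr * (v.mean(0) - fantasy.mean(0))
            c += lr * (ph_data.mean(0) - ph_model.mean(0))
    return W, b, c
```

The contrast with ordinary CD is only in the initialisation of the negative-phase chains: CD restarts them at the data on every update, while PCD lets them persist.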

Tue 1 July 2008 (Lawrence Murray)

Fearnhead, P., Papaspiliopoulos, O. & Roberts, G.O. (2008) Particle filters for partially observed diffusions.

For the temporally challenged, the following communication provides a very brief sketch of the essential points:

Fearnhead, P., Papaspiliopoulos, O. & Roberts, G.O. (2006) Particle filtering for diffusions avoiding time-discretisations. IEEE Nonlinear Statistical Signal Processing Workshop, 141-143.

The following video and slides of Omiros Papaspiliopoulos's talk at a recent Newton Institute workshop may also be useful in understanding the material:

http://www.newton.ac.uk/webseminars/pg+ws/2008/sch/schw05/0620/papaspiliopoulos/
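By way of contrast, the baseline these papers improve on is a particle filter run on a time-discretised version of the diffusion. A minimal sketch of that baseline, a bootstrap filter with an Euler-Maruyama transition (the SDE, step size, and Gaussian observation model here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter(ys, n_particles=1000, dt=0.1, sigma=0.5, obs_sd=0.3,
                     drift=lambda x: -x):
    """Bootstrap particle filter for dX = drift(X) dt + sigma dW, observed
    at unit time intervals as y ~ N(x, obs_sd^2).
    Returns the filtering mean at each observation time."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        # propagate: Euler-Maruyama steps over one observation interval
        for _ in range(int(round(1.0 / dt))):
            x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
        # weight by the observation likelihood
        logw = -0.5 * ((y - x) / obs_sd) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # multinomial resampling
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)
```

The step size dt introduces a discretisation bias; Fearnhead et al. remove it by constructing unbiased random-weight estimates of the transition density, which is the point of "avoiding time-discretisations" in the title.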

Tue 17 June 2008 (Edwin Bonilla)

Tue 11 May 2008 (Nicolas Heess)

Unsupervised Grammar Learning

Tue 29 Apr 2008 (Kian Ming Chai)

J. N. Corcoran and R. L. Tweedie (2002) Perfect sampling from independent Metropolis-Hastings chains http://dx.doi.org/10.1016/S0378-3758(01)00243-9

Tue 15 Apr 2008 (Amos Storkey)

Neal, R.M. Improving Asymptotic Variance of MCMC Estimators: Non-reversible Chains are Better. http://www.cs.toronto.edu/~radford/ftp/asymvar.pdf

Tue 1 Apr 2008 (Chris Williams)

Fukumizu, K., A. Gretton, X. Sun and B. Schölkopf: Kernel Measures of Conditional Dependence. Proceedings of the 20th Neural Information Processing Systems Conference (NIPS 2007), 1-13, MIT Press, Cambridge, Mass., USA (in press, January 2008)

http://www.ism.ac.jp/~fukumizu/papers/fukumizu_etal_nips2007_extended.pdf

Gretton, A., K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf and A. J. Smola: A Kernel Statistical Test of Independence. Proceedings of the 20th Annual Conference on Neural Information Processing Systems (NIPS 2007), 1-8, MIT Press, Cambridge, Mass., USA (in press, January 2008)

http://www.kyb.mpg.de/publications/attachments/NIPS2007-Gretton_[0].pdf
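The test statistic in the second paper is the Hilbert-Schmidt Independence Criterion (HSIC), whose biased empirical estimate is trace(KHLH)/n² for the kernel matrices of the two samples, with H the centring matrix and the null distribution obtained e.g. by permutation. A minimal sketch (RBF kernels with the median-heuristic bandwidth are common choices assumed here, not necessarily the paper's exact setup):

```python
import numpy as np

def rbf_gram(z):
    """RBF Gram matrix for samples z of shape (n, d), median-heuristic bandwidth."""
    d2 = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / np.median(d2[d2 > 0]))

def hsic(x, y):
    """Biased empirical HSIC: trace(K H L H) / n^2 with H the centring matrix."""
    n = x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(x) @ H @ rbf_gram(y) @ H) / n ** 2

def hsic_test(x, y, n_perm=500, seed=0):
    """Permutation p-value for independence of paired samples x, y."""
    rng = np.random.default_rng(seed)
    stat = hsic(x, y)
    null = np.array([hsic(x, y[rng.permutation(len(y))]) for _ in range(n_perm)])
    return stat, float(np.mean(null >= stat))

# e.g. stat, p = hsic_test(x, y) with x, y of shape (n, d); a small p-value
# is evidence against independence.
```

(The paper also derives the asymptotic null distribution of the statistic, which avoids the permutation step; permutation is used here for simplicity.)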

Tue 11 Mar 2008 (Lawrence Murray)

Tue 26 Feb 2008 (Matthias Seeger)

Expectation Propagation -- Experimental Design for the Sparse Linear Model

Expectation propagation (EP) is a novel variational method for approximate Bayesian inference, which has given promising results in terms of computational efficiency and accuracy in several machine learning applications. It can readily be applied to inference in linear models with non-Gaussian priors, generalised linear models, or nonparametric Gaussian process models, among others, yet to our knowledge has not so far been used in Statistics. I will give an introduction to this framework. I will then show how to address sequential experimental design for a linear model with non-Gaussian sparsity priors, giving some results from two different machine learning applications. These results indicate that experimental design for these models may have significantly different properties than for linear-Gaussian models, where Bayesian inference is analytically tractable and experimental design seems best understood. In fact, in the applications we considered, the quality of sequentially optimised designs improved more markedly over random designs when non-Gaussian priors were employed. A satisfactory explanation for this beneficial interplay would be of high importance, yet to our knowledge has not been given. I will show recent results in the area of measuring images linearly, which shed interesting light on the very active area of compressed sensing.
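For reference, the linear-Gaussian baseline mentioned in the abstract, where inference is exact, can be sketched in a few lines: greedily pick the measurement x maximising the information gain 0.5·log(1 + xᵀΣx/σ²) of the Gaussian posterior over the weights, then apply the conjugate rank-one update. (The model, candidate set, and hyperparameters below are illustrative; the talk's EP treatment of sparse non-Gaussian priors is not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(2)
d, noise_var = 5, 0.1
w_true = rng.normal(size=d)             # ground truth, used only to simulate data
mu, Sigma = np.zeros(d), np.eye(d)      # Gaussian prior on the weights w
candidates = rng.normal(size=(100, d))  # candidate measurement vectors

for _ in range(10):
    # information gain of measuring at x: 0.5 * log(1 + x^T Sigma x / noise_var)
    quad = np.einsum('nd,de,ne->n', candidates, Sigma, candidates)
    x = candidates[np.argmax(0.5 * np.log1p(quad / noise_var))]
    y = x @ w_true + np.sqrt(noise_var) * rng.normal()  # simulated response
    # conjugate rank-one posterior update for y = x^T w + N(0, noise_var)
    Sx = Sigma @ x
    denom = noise_var + x @ Sx
    mu = mu + Sx * (y - x @ mu) / denom
    Sigma = Sigma - np.outer(Sx, Sx) / denom
```

With a sparsity prior in place of the Gaussian, this posterior is no longer available in closed form, which is where EP enters in the talk.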

Tue 12 Feb 2008 (Edwin Bonilla)

  • Shipeng Yu, Balaji Krishnapuram, Romer Rosales, Harald Steck, R. Bharat Rao. Bayesian Co-Training.

Tue 29 Jan 2008 (NIPS review 2)

Please select a paper on which to give a brief (5 min) overview and list it here.

Tue 15 Jan 2008 (NIPS review 1)

Please select a paper and list it here.

All papers as one PDF here (excluding CKIW's, see email).
