
Probabilistic Inference Group (PIGS)

The Probabilistic Inference Group (PIGS) is a paper discussion group focusing on probabilistic and information-theoretic approaches to machine learning problems. Meetings are generally held fortnightly on Mondays at 10:30am in room 2.33 of the Informatics Forum. Announcements are made through the PIGS mailing list.

Instructions for presenters:

1) Choose a mainstream ML paper (or two).
2) Provide paper(s) at least one week in advance of the meeting.
3) Lead a discussion of the paper(s) in the meeting.

Mainstream means a paper that does not depend heavily on domain-specific background to be comprehensible. A possible test is whether the techniques could fairly readily be transferred to another application area. Papers from conferences such as NIPS, ICML, UAI and AISTATS, and from journals such as JMLR (as well as machine learning papers in IEEE PAMI), are likely to be in scope, but papers from other sources could well fit too.

Students should discuss their paper selections with their supervisor to make sure they are reasonable choices. It is acceptable to relate the selected papers to the presenter's research, but not at the expense of discussing the papers themselves.

If people want to make thematic groupings of readings, it should be possible to arrange swaps in the rota.

Upcoming discussions:

June 20: Lukasz

June 6: Charlie

May 23: Weng-Keen

Presentation of his ICML 2016 paper: Efficient Multi-Instance Learning for Activity Recognition from Time Series Data Using an Auto-Regressive Hidden Markov Model

Abstract:

Activity recognition from sensor data has spurred a great deal of interest due to its impact on health care. Prior work on activity recognition from multivariate time series data has mainly applied supervised learning techniques which require a high degree of annotation effort to produce training data with the start and end times of each activity. In order to reduce the annotation effort, we present a weakly supervised approach based on multi-instance learning. We introduce a generative graphical model for multi-instance learning on time series data based on an auto-regressive hidden Markov model. Our approach models both the structure within an instance as well as the structure between instances in a bag. Our model has a number of advantages, including the ability to produce both bag and instance-level predictions as well as an efficient exact inference algorithm based on dynamic programming.
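
To make the dynamic-programming inference mentioned in the abstract concrete, here is a minimal forward-pass sketch for a plain first-order auto-regressive HMM with Gaussian emissions. This is an illustrative, assumption-laden sketch rather than the authors' model or code: the multi-instance (bag/instance) structure is omitted, and all names (arhmm_loglik, pi, A, coef, bias, var) are made up for the example.

import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def arhmm_loglik(x, pi, A, coef, bias, var):
    """Log-likelihood of a 1-D sequence x under an AR(1)-HMM.
    pi: (K,) initial state probabilities; A: (K, K) transition matrix;
    coef, bias, var: (K,) per-state emission parameters, with
    x_t | z_t = k, x_{t-1} ~ N(coef[k] * x_{t-1} + bias[k], var[k])."""
    T, K = len(x), len(pi)
    # t = 0: condition the first emission on a zero "previous" observation
    log_alpha = np.log(pi) + norm.logpdf(x[0], bias, np.sqrt(var))
    for t in range(1, T):
        log_emit = norm.logpdf(x[t], coef * x[t - 1] + bias, np.sqrt(var))
        # dynamic-programming step: marginalise over the previous hidden state
        log_alpha = log_emit + logsumexp(log_alpha[:, None] + np.log(A), axis=0)
    return logsumexp(log_alpha)

# Toy usage with K = 2 states (values are arbitrary):
x = np.array([0.1, 0.3, 0.2, 0.5])
print(arhmm_loglik(x, pi=np.array([0.5, 0.5]),
                   A=np.array([[0.9, 0.1], [0.2, 0.8]]),
                   coef=np.array([0.8, -0.3]), bias=np.zeros(2), var=np.ones(2)))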

April 25: Amos

Knowledge Matters: Importance of Prior Information for Optimization C. Gulcehre and Yoshua Bengio

April 11: Chris

A survey of techniques for incremental learning of HMM parameters Wael Khreich, Eric Granger, Ali Miri, Robert Sabourin

Past discussions:

March 14: Theo

NICE: Non-linear Independent Components Estimation Laurent Dinh, David Krueger and Yoshua Bengio

Feb 29: George

Automatic Variational Inference in Stan Alp Kucukelbir, Rajesh Ranganath, Andrew Gelman and David Blei

Feb 15: Matt

A note on the evaluation of generative models Lucas Theis, Aäron van den Oord and Matthias Bethge

Feb 1: Sohan

Robust Spectral Inference for Joint Stochastic Matrix Factorization Moontae Lee, David Bindel and David Mimno

Jan 18: NIPS 2015 review

George: Bayesian Dark Knowledge Anoop Korattikara, Vivek Rathod, Kevin Murphy, Max Welling

Mingjun: Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference Edward Meeds, Max Welling

Harri: Semi-supervised Learning with Ladder Networks Antti Rasmus, Harri Valpola et al.

Theo: Generative Image Modeling Using Spatial LSTMs Lucas Theis, Matthias Bethge

Chris: Unsupervised Learning by Program Synthesis Ellis, Solar-Lezama, Tenenbaum

Krzysztof: Training Very Deep Networks Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber

Oct 26: Harri

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

Oct 12: Gavin

Variational Dropout and the Local Reparameterization Trick Diederik P. Kingma, Tim Salimans, Max Welling

Meetings in 2015

Meetings in 2014

Meetings in 2013

Meetings in 2012

Meetings in 2011

Meetings in 2010

Meetings in 2009

Meetings in 2008

Meetings in 2007

Earlier meetings (2002-2006) on old website
