The Probabilistic Inference Group (PIGS) is a paper discussion group focusing on probabilistic and information-theoretic approaches to machine learning problems. Meetings are generally held fortnightly on Mondays at 10:30am in room 2.33 of the Informatics Forum. Announcements are made through the PIGS mailing list.

Instructions for presenters:

1) Choose a **mainstream** ML paper (or two).

2) Provide paper(s) at least **one week** in advance of the meeting.

3) Lead a discussion of the paper(s) in the meeting.

Mainstream means a paper that does not depend heavily on domain-specific background to be comprehensible. A possible test is whether the techniques could fairly readily be transferred to another application area. Papers from conferences like NIPS, ICML, UAI, AISTATS, and journals like JMLR, as well as ML papers from IEEE PAMI, are likely to be in scope; but note that papers from other sources could well fit too.

Students should discuss their paper selections with their supervisor to make sure they are reasonable choices. It is acceptable to relate the selected papers to the presenter's research, but not at the expense of discussing the selected paper itself.

If people want to group readings thematically, it should be possible to arrange swaps in the rota to make this happen.

Presentation of his KDD 2016 paper: A Subsequence Interleaving Model for Sequential Pattern Mining

Abstract:

Recent sequential pattern mining methods have used the minimum description length (MDL) principle to define an encoding scheme which describes an algorithm for mining the most compressing patterns in a database. We present a novel subsequence interleaving model based on a probabilistic model of the sequence database, which allows us to search for the most compressing set of patterns without designing a specific encoding scheme. Our proposed algorithm is able to efficiently mine the most relevant sequential patterns and rank them using an associated measure of interestingness. The efficient inference in our model is a direct result of our use of a structural expectation-maximization framework, in which the expectation-step takes the form of a submodular optimization problem subject to a coverage constraint. We show on both synthetic and real world datasets that our model mines a set of sequential patterns with low spuriousness and redundancy, high interpretability and usefulness in real-world applications. Furthermore, we demonstrate that the quality of the patterns from our approach is comparable to, if not better than, existing state of the art sequential pattern mining algorithms.
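The abstract's E-step reduces to submodular maximization under a coverage constraint. As a rough illustration of that kind of subproblem (not the paper's actual algorithm; names and data below are invented), greedy selection of a monotone submodular coverage objective can be sketched as:

```python
# Toy sketch: greedy maximization of a monotone submodular coverage
# objective. Greedy selection enjoys the classic (1 - 1/e)
# approximation guarantee for such objectives.

def greedy_cover(candidates, universe, budget):
    """Greedily pick up to `budget` candidate sets maximizing coverage."""
    covered = set()
    chosen = []
    for _ in range(budget):
        # Pick the candidate with the largest marginal coverage gain.
        best = max(candidates, key=lambda s: len((set(s) - covered) & universe))
        gain = len((set(best) - covered) & universe)
        if gain == 0:
            break  # no remaining candidate adds anything new
        chosen.append(best)
        covered |= set(best) & universe
    return chosen, covered

# Hypothetical "patterns" covering items of a sequence database.
patterns = [{"a", "b"}, {"b", "c", "d"}, {"d", "e"}]
chosen, covered = greedy_cover(patterns, universe={"a", "b", "c", "d", "e"}, budget=2)
print(sorted(covered))
```

The greedy rule picks {"b", "c", "d"} first (largest gain), then {"a", "b"}, covering four of the five items within the budget.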

Matt: Slice Sampling on Hamiltonian Trajectories, Benjamin Bloem-Reddy and John P. Cunningham

Harri: Associative Long Short-Term Memory, Ivo Danihelka *et al.* (http://arxiv.org/abs/1602.03032)

Pol: Autoencoding beyond pixels using a learned similarity metric, Larsen *et al.*

Gavin: Noisy Activation Functions, Caglar Gulcehre *et al.*

Attend, Infer, Repeat: Fast Scene Understanding with Generative Models

S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, Geoffrey E. Hinton

Learning to decompose for object detection and instance segmentation

Eunbyung Park, Alexander C. Berg

Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation, Jonathan Tompson, Arjun Jain, Yann LeCun and Christoph Bregler

Conditional Random Fields as Recurrent Neural Networks, Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip H. S. Torr

Presentation of his ICML 2016 paper: Efficient Multi-Instance Learning for Activity Recognition from Time Series Data Using an Auto-Regressive Hidden Markov Model

Abstract:

Activity recognition from sensor data has spurred a great deal of interest due to its impact on health care. Prior work on activity recognition from multivariate time series data has mainly applied supervised learning techniques which require a high degree of annotation effort to produce training data with the start and end times of each activity. In order to reduce the annotation effort, we present a weakly supervised approach based on multi-instance learning. We introduce a generative graphical model for multi-instance learning on time series data based on an auto-regressive hidden Markov model. Our approach models both the structure within an instance as well as the structure between instances in a bag. Our model has a number of advantages, including the ability to produce both bag and instance-level predictions as well as an efficient exact inference algorithm based on dynamic programming.
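The efficient exact inference mentioned in the abstract rests on chain-structured dynamic programming. As a generic illustration (this is the standard HMM forward recursion, not the paper's AR-HMM; all numbers below are made up), the DP backbone looks like:

```python
# Standard HMM forward algorithm: computes p(observations) in time
# O(T * K^2) for T observations and K states via dynamic programming.

def forward(pi, A, B, obs):
    """pi[i]: initial prob of state i; A[i][j]: transition i -> j;
    B[i][o]: emission prob of symbol o in state i; obs: symbol indices."""
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    # Recurse: sum over predecessor states, then emit.
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
            for j in range(len(pi))
        ]
    return sum(alpha)  # total probability of the observed sequence

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
likelihood = forward(pi, A, B, obs=[0, 1, 1])
print(likelihood)  # prints 0.145984 (up to float rounding)
```

In practice one works in log space (or rescales alpha each step) to avoid underflow on long sequences.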

Knowledge Matters: Importance of Prior Information for Optimization C. Gulcehre and Yoshua Bengio

A survey of techniques for incremental learning of HMM parameters, Wael Khreich, Eric Granger, Ali Miri, Robert Sabourin

NICE: Non-linear Independent Components Estimation, Laurent Dinh, David Krueger and Yoshua Bengio

Automatic Variational Inference in Stan Alp Kucukelbir, Rajesh Ranganath, Andrew Gelman and David Blei

A note on the evaluation of generative models, Lucas Theis, Aäron van den Oord and Matthias Bethge

Robust Spectral Inference for Joint Stochastic Matrix Factorization Moontae Lee, David Bindel and David Mimno

**George:** Bayesian Dark Knowledge Anoop Korattikara, Vivek Rathod, Kevin Murphy, Max Welling

**Mingjun:** Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference Edward Meeds, Max Welling

**Harri:** Semi-supervised learning with ladder networks, Antti Rasmus, Harri Valpola *et al.*

**Theo:** Generative Image Modeling Using Spatial LSTMs Lucas Theis, Matthias Bethge

**Chris:** Unsupervised Learning by Program Synthesis, Ellis, Solar-Lezama, Tenenbaum

**Krzysztof:** Training Very Deep Networks, Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

Variational Dropout and the Local Reparameterization Trick Diederik P. Kingma, Tim Salimans, Max Welling

- Visit PIGS meetings 2015 for a list of meetings held in the 2014-15 academic year (Sep 2014 - Jul 2015).

- Visit PIGS meetings 2014 for a list of meetings held in the 2013-14 academic year (Sep 2013 - Jul 2014).

- Visit PIGS meetings 2013 for a list of meetings held in 2013 (until May).

- Visit PIGS meetings 2012 for a list of meetings held in 2012.

- Visit PIGS meetings 2011 for a list of meetings held in 2011.

- Visit PIGS meetings 2010 for a list of meetings held in 2010.

- Visit PIGS meetings 2009 for a list of meetings held in 2009.

- Visit PIGS meetings 2008 for a list of meetings held in 2008.

- Visit PIGS meetings 2007 for a list of meetings held in 2007.

- Visit PIGSTwoThousandAndSixAndEarlier for a list of meetings held in 2006 and earlier.

| Attachment | Action | Size | Date | Who | Comment |
|---|---|---|---|---|---|
| icml-2up.zip | manage | 4178.0 K | 15 Jul 2008 - 12:53 | Main.s0565918 | |
| latent-models-covariance.pdf | manage | 276.1 K | 20 Jul 2007 - 13:38 | Main.s9810791 | Latent models for cross-covariance (PIGS 24th July 2007) |

Topic revision: r433 - 10 Oct 2016 - 12:25:31 - AmosStorkey

Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
