The Probabilistic Inference Group (PIGS) is a paper discussion group focusing on probabilistic and information-theoretic approaches to machine learning problems. Meetings are generally held on Mondays at 11am in room 4.31/4.33 of the Informatics Forum, though previous bookings mean some meetings will be in 2.33. Announcements are made through the PIGS mailing list.

We have moved to a themed, team-based approach to PIGS meetings, which now happen weekly in 4.31/4.33 at 11am.

**Mon 31st March** Spectral learning for **LDS** http://www.cs.cmu.edu/~beb/files/BootsThesis.pdf, Byron Boots' thesis (2012), Chapter 2: A Spectral Learning Algorithm for Constant-Covariance Kalman Filters (Konstantinos, Chris W)

**Mon 7th April** Spectral learning for **MoG** http://arxiv.org/abs/1206.5766, Hsu and Kakade (2012), Learning mixtures of spherical Gaussians: moment methods and spectral decompositions (Benigno, Partha)

**Mon 14th April** Spectral learning for **LDA** http://arxiv.org/pdf/1204.6703v4.pdf Anandkumar et al. (2013), A Spectral Algorithm for Latent Dirichlet Allocation (Iain, Krzysztof)

**Mon 21st April** Spectral learning for (**HMM** or **PCFG** or ?) (TBD)

- ICML 2012 tutorial for Spectral Approaches to Learning Latent Variable Models
- A technical report connecting EM with spectral learning for the LDS case
- Tensor Decompositions for Learning Latent Variable Models (Anandkumar et al.)
- ICML 2013 tutorial video for the above paper
- A more concise version (extended abstract) for spectral learning of MoG

Organiser: Matt (m.m.graham@edREMOVE_THIS.ac.uk)

**Mon 3rd March** Koller and Friedman (2009) Probabilistic Graphical Models, Chapter 21: Causality, part 1 (Boris, Chris W)

**Mon 10th March** Koller and Friedman (2009) Probabilistic Graphical Models, Chapter 21: Causality, part 2 (Agamemnon, Amos)

**Mon 17th March** Schölkopf et al. (2012) On Causal and Anticausal learning (Iain, Matt)

**Mon 24th March** Winn (2012) Causality with Gates (Amos, Zhanxing).

- Video tutorial by Phil Dawid on statistical causality
- NIPS 2008 Causality workshop (videos)
- NIPS 2013 Causality workshop (slides)
- Introductory material from Schölkopf group
- Chapters 21-23 of Advanced Data Analysis from an Elementary Point of View by Cosma Rohilla Shalizi

**Mon 2 December:** Gaussian Processes tutorial (Iain), slides

**Mon 13 January:** Agamemnon and Amos will read Information-based objective functions for active data selection, David J.C. MacKay, Neural Computation 4, 589–603 (1992)

**Mon 20 January:** NIPS postcards.

**Mon 27 January:** Srinivas et al., "Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting", IEEE Transactions on Information Theory, vol. 58, no. 5, pp. 3250-3265, 2012, http://las.ethz.ch/srinivas10gaussian-long.pdf (possibly also looking at Snoek et al., Practical Bayesian Optimization of Machine Learning Algorithms, http://arxiv.org/abs/1206.2944). Jari will lead.

**Mon 3 February:** Self-Paced Learning for Latent Variable Models, by Packer, Kumar, & Koller. Relates to curriculum learning rather than active learning per se. (Chris L. and Partha)

**Mon 10 February:** Ziyu Wang, Masrour Zoghi, Frank Hutter, David Matheson and Nando de Freitas, Bayesian Optimization in High Dimensions via Random Embeddings (Guido and Pavlos).

**Mon 7 October:** Leon Bottou (2010) Large-Scale Machine Learning with Stochastic Gradient Descent. This is an introductory paper on stochastic gradient descent. For those wanting a little more detail on online learning methods, Sébastien Bubeck's lecture notes may be helpful: Introduction to Online Optimization. (Amos, Konstantinos)
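As a primer for the discussion, here is a minimal sketch of plain SGD on a linear least-squares toy problem. This is an illustrative assumption, not code from Bottou's paper; the function name `sgd_least_squares`, the data and the hyperparameters are all made up:

```python
# Minimal sketch of stochastic gradient descent: update the parameters using
# the gradient on one randomly chosen example at a time, instead of the full
# dataset. Illustrative only -- not from Bottou (2010).
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            # gradient of 0.5 * (x_i . w - y_i)^2 with respect to w
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

# Usage: recover known weights from noiseless synthetic data.
X = np.random.default_rng(1).normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = sgd_least_squares(X, y)
```

On noiseless linear data like this, the per-example updates contract the error toward `w_true`; the interesting behaviour discussed in the paper is what happens when the data are large and noisy and only one pass is affordable.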

**Mon 14 October:** Non-stationary loss and adaptive learning rates: No more pesky learning rates (Beni, Jinli). **PLEASE NOTE THIS WILL BE IN 2.33.** An extension of that work, which will *not* be presented: Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients.

**Mon 21 October:** Ahn, Korattikara and Welling (2012) Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring. (Guido, Matt)

A hybrid of stochastic gradient descent and Langevin-dynamics-based MCMC for learning and sampling from the posterior over model parameters, using only a small mini-batch of the dataset on each update.

*Useful resources:*

Welling and Teh (2011), Bayesian Learning via Stochastic Gradient Langevin Dynamics - precursor paper to that we'll cover explaining how a stochastic (mini-batch) estimate of the log-likelihood gradient can be used with a Langevin dynamics based update to construct a Markov chain which will converge to the posterior over parameters (video presentation of paper)

Roberts and Tweedie (1996), Exponential convergence of Langevin distributions and their discrete approximations - describes the *Metropolis-adjusted Langevin algorithm* (MALA) for unbiased sampling using discretised Langevin dynamics

Video presentation by Max Welling explaining SGLD and SGFS
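The SGLD update described above, a mini-batch gradient step plus injected Gaussian noise of matching scale, can be sketched on a toy 1-D Gaussian model. Everything here (the data, the fixed step size, the burn-in length) is an illustrative assumption; in particular Welling and Teh anneal the step size, which this sketch does not:

```python
# Illustrative sketch of stochastic gradient Langevin dynamics (SGLD) for
# sampling the posterior mean of a Gaussian with known unit variance.
# Didactic toy example, not code from Welling & Teh (2011).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)   # likelihood: N(theta, 1)
N, batch = len(data), 50
theta, eps = 0.0, 1e-3                             # fixed step size (paper anneals it)
samples = []
for t in range(5000):
    idx = rng.integers(0, N, size=batch)
    # Mini-batch estimate of the gradient of the log posterior (flat prior):
    # grad log p(x_i | theta) = x_i - theta, rescaled by N / batch.
    grad = (N / batch) * np.sum(data[idx] - theta)
    # Langevin update: half step-size times gradient, plus injected noise
    # whose variance matches the step size.
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.normal()
    if t >= 1000:                                  # discard burn-in
        samples.append(theta)
posterior_mean = np.mean(samples)
```

With a flat prior the posterior mean is essentially the data mean, so `posterior_mean` should land close to it; the point of the sketch is that no Metropolis correction and no full-data gradient are needed.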

**Mon 28 October:** Le Roux, Manzagol and Bengio (2007) Topmoumoute Online Natural Gradient Algorithm. This combines online learning with the idea of natural gradient. (Jeff, Mihai) **PLEASE NOTE THIS WILL BE IN 2.33**

**Mon 4 November:** Schmidt, Le Roux and Bach (2013) Minimizing Finite Sums with the Stochastic Average Gradient. This provides interesting theoretical results on the right scheduling process for online learning methods. (Zhanxing, Boris)

**Mon 11 November:** Discussion meeting covering a) potential research directions relating to stochastic gradients, with practical suggestions on how and when to use them, and b) choosing people for the next PIGS theme.

**Mon 18 November:** We will discuss the practical decisions around using stochastic online methods. This will involve a brief review of the suggestions and empirical issues discussed in the following papers. We suggest a cursory look at the papers; we will not spend much time on theoretical analyses this time.

- On Optimization Methods for Deep Learning - This will be the main focus.
- Sample Size Selection in Optimization Methods for Machine Learning
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Sparse Online Learning via Truncated Gradient

For future reference, the meetings on 9 Dec, 23 Jun 2014 and 30 Jun 2014 will be in 2.33.

Other papers:

Duchi, Hazan and Singer (2010) Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. The subgradient is particularly important in general settings, e.g. where we have potential non-differentiability, and is something that can be combined naturally with online learning. (?,?)
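For concreteness, here is a minimal sketch of the diagonal AdaGrad update on a smooth toy objective. The function name and hyperparameters are illustrative assumptions; for a non-smooth objective the gradient below would be replaced by any subgradient:

```python
# Sketch of diagonal AdaGrad (Duchi, Hazan and Singer): each coordinate's
# step is divided by the root of its accumulated squared gradients, so
# rarely-updated coordinates keep larger effective learning rates.
import numpy as np

def adagrad_step(w, grad, G, lr=0.5, eps=1e-8):
    """One AdaGrad update; G accumulates elementwise squared gradients."""
    G += grad ** 2
    w -= lr * grad / (np.sqrt(G) + eps)
    return w, G

# Usage: minimise the smooth toy objective f(w) = 0.5 * ||w - target||^2,
# whose gradient is simply w - target.
target = np.array([3.0, -1.0])
w, G = np.zeros(2), np.zeros(2)
for _ in range(500):
    w, G = adagrad_step(w, w - target, G)
```

The coordinate starting further from its target accumulates a larger `G` and so takes proportionally damped steps; both coordinates nonetheless converge to `target`.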

Feng Niu, Benjamin Recht, Christopher Ré and Stephen J. Wright (2011) Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. This covers the issue of large-scale parallelisation of the methods, which is pretty important in many settings.
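A structural sketch of the Hogwild! idea: several workers apply SGD updates to a shared weight vector with no locking at all. Python's GIL means this only illustrates the lock-free structure rather than a real parallel speedup, and the data, row partitioning and hyperparameters are made up (the paper samples examples randomly rather than partitioning them):

```python
# Hogwild!-style lock-free SGD sketch: threads race to update a shared
# weight vector in place, with no synchronisation around the updates.
import threading
import numpy as np

X = np.random.default_rng(0).normal(size=(400, 4))
w_true = np.array([1.0, -1.0, 2.0, 0.5])
y = X @ w_true
w = np.zeros(4)                      # shared state, updated without a lock

def worker(rows, lr=0.02, epochs=50):
    for _ in range(epochs):
        for i in rows:
            # Racy read-modify-write on the shared vector w.
            w[:] -= lr * (X[i] @ w - y[i]) * X[i]

threads = [threading.Thread(target=worker, args=(range(k, 400, 4),))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Occasional stale reads or clobbered updates are tolerated; on this noiseless toy problem the iterates still contract toward `w_true`, which mirrors the paper's argument that sparse, infrequent conflicts do little harm.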

Past meetings

Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses

P. Loh and M.J. Wainwright

A Spectral Algorithm for Latent Dirichlet Allocation

Animashree Anandkumar, Dean P. Foster, Daniel Hsu, Sham M. Kakade, Yi-Kai Liu

Dual-Space Analysis of the Sparse Linear Model

David Wipf, Yi Wu

A Nonparametric Conjugate Prior Distribution for the Maximizing Argument of a Noisy Function

Pedro A. Ortega, Jordi Grau-Moya, Tim Genewein, David Balduzzi, Daniel A. Braun

Structure Discovery in Nonparametric Regression through Compositional Kernel Search

David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua B. Tenenbaum, Zoubin Ghahramani

Gaussian Process Covariance Kernels for Pattern Discovery and Extrapolation

Andrew Gordon Wilson, Ryan Prescott Adams

Sum-Product Networks: A New Deep Architecture

Hoifung Poon and Pedro Domingos

Random Search for Hyper-Parameter Optimization

James Bergstra, Yoshua Bengio

Practical Bayesian Optimization of Machine Learning Algorithms

Jasper Snoek, Hugo Larochelle, Ryan Adams

R. Gens, P. Domingos. Discriminative Learning of Sum-Product Networks

S. C. Kou, Benjamin P. Olding, Martin Lysy & Jun S. Liu: A Multiresolution Method for Parameter Estimation of Diffusion Processes

http://www.tandfonline.com/doi/full/10.1080/01621459.2012.720899

We will review the proceedings of NIPS 2012. If you would like to discuss a paper, please edit this section to include the title together with your initials.

**AJS:** Learning from the Wisdom of Crowds by Minimax Entropy. Dengyong Zhou, John Platt, Sumit Basu, Yi Mao. http://books.nips.cc/papers/files/nips25/NIPS2012_1091.pdf

**SL:** MCMC for continuous-time discrete-state systems. Vinayak A Rao, Yee Whye Teh. http://books.nips.cc/papers/files/nips25/NIPS2012_0331.pdf

**KG:** Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. Lars Buesing, Jakob H. Macke, Maneesh Sahani. http://books.nips.cc/papers/files/nips25/NIPS2012_0806.pdf

**YZ:** Modelling Reciprocating Relationships with Hawkes Processes. Charles Blundell, Katherine A. Heller. http://books.nips.cc/papers/files/nips25/NIPS2012_1231.pdf

"Reconceiving Machine Learning", Bob Williamson et al.

http://users.cecs.anu.edu.au/~williams/DPProposal.pdf

"Machine Learning that Matters", Kiri Wagstaff

http://icml.cc/2012/papers/298.pdf

"Improving neural networks by preventing co-adaptation of feature detectors"

G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever and R. R. Salakhutdinov

http://www.cs.toronto.edu/~hinton/absps/dropout.pdf

"Multiresolution Gaussian Processes"

E. B. Fox and D. B. Dunson

NIPS 2012

http://stat.duke.edu/sites/default/files/papers/2012-11.pdf

The scaled unscented transformation

S.J. Julier

Proceedings of the American Control Conference (2002)

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1025369&tag=1

New extension of the Kalman filter to nonlinear systems

S.J. Julier, J.K. Uhlmann

Signal Processing, Sensor Fusion, and Target Recognition (1997)

http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=925842

Schedule

Please fill in your name or contact Krzysztof Geras if you would like to present and discuss papers in a specific research area of machine learning.

Please add papers you consider appropriate for PIGS. Please also add thematic categories that are not covered.

- UCL (including Gatsby) http://www.csml.ucl.ac.uk/reading_groups/
- Toronto http://learning.cs.toronto.edu/mlreading.html

No Unbiased Estimator of the Variance of K-Fold Cross-Validation

Yoshua Bengio, Yves Grandvalet

http://www.jmlr.org/papers/volume5/grandvalet04a/grandvalet04a.pdf

Learning to Represent Spatial Transformations with Factored Higher-Order Boltzmann Machines

Roland Memisevic, Geoffrey E. Hinton

Neural Computation June 2010, Vol. 22, No. 6: 1473-1492.

http://www.cs.toronto.edu/~rfm/pubs/factored.pdf

http://jmlr.csail.mit.edu/proceedings/papers/v22/elidan12b/elidan12b.pdf

Copula Network Classifiers (CNCs)

http://jmlr.csail.mit.edu/proceedings/papers/v22/elidan12a/elidan12a.pdf

JRSS C (Appl. Stat.) 59 (2010) http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9876.2009.00701.x/abstract

Bayesian Model Checking and Model Diagnostics - Hal S. Stern and Sandip Sinharay

Handbook of Statistics, Vol. 25 (2005)

and

Induction and deduction in Bayesian data analysis, Andrew Gelman, RMM vol 2, 2011 67-78

Ioan will present the following papers:

Bayesian Conditional Cointegration - Chris Bracegirdle and David Barber

http://icml.cc/2012/papers/570.pdf

State-Space Inference and Learning with Gaussian Processes - Ryan Turner, Marc Deisenroth and Carl Rasmussen

http://jmlr.csail.mit.edu/proceedings/papers/v9/turner10a/turner10a.pdf

Yichuan will present the following paper:

Accelerated Adaptive Markov Chain for Partition Function Computation - S. Ermon, C. P. Gomes, A. Sabharwal, B. Selman (NIPS 2011)

Please add your initials below together with a link to the paper you wish to present.

IS: Approximate Inference in Additive Factorial HMMs with Application to Energy Disaggregation - J. Zico Kolter, Tommi Jaakkola

CS: **Lightning-speed Structure Learning of Nonlinear Continuous Networks** - Gal Elidan

KG: **Learning from Weak Teachers** - Ruth Urner, Shai Ben-David and Ohad Shamir

PO: **Causality with Gates** - John Winn

CW: Deep Boltzmann Machines as Feed-Forward Hierarchies -- Montavon, Braun, Mueller

AS: Classifier Cascade for Minimizing Feature Evaluation Cost -- Minmin Chen, Zhixiang Xu, Kilian Weinberger, Olivier Chapelle, Dor Kedem

Andrea will discuss the following paper:

Designing attractive models via automated identification of chaotic and oscillatory dynamical regimes: Silk D, Kirk PD, Barnes CP, Toni T, Rose A, Moon S, Dallman MJ, Stumpf

Simon will discuss the following papers:

Bayesian Compressive Sensing: S. Ji, Y. Xue, L. Carin

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4524050

Bayesian Compressive Sensing Via Belief Propagation: D. Baron , S. Sarvotham , R. G. Baraniuk

http://webee.technion.ac.il/people/drorb/pdf/CSBP012010.pdf

Botond will discuss the following papers:

Opper, Paquet, Winther: Improving on Expectation Propagation

http://www.ulrichpaquet.com/Papers/ImprovingOnEP.pdf

Opper, Paquet, Winther: Cumulant expansions for improved inference with EP in discrete Bayesian networks

http://las.ethz.ch/discml/papers/opper11cumulant.pdf

Chris will discuss the following paper:

Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection

Gavin Brown, Adam Pocock, Ming-Jie Zhao, Mikel Luján;

JMLR 13(Jan):27–66, 2012.

Charles will discuss the following paper:

A Spectral Algorithm for Learning Hidden Markov Models

Daniel Hsu, Sham M. Kakade, Tong Zhang

Iain will discuss the following paper:

Statistical Tests for Optimization Efficiency: L. Boyles, A. Korattikara, D. Ramanan, M. Welling (NIPS 2011)

Jono will present the following papers:

Bayesian Bias Mitigation for Crowdsourcing: Fabian L. Wauthier, Michael I. Jordan. Proceedings of NIPS (2011) http://books.nips.cc/papers/files/nips24/NIPS2011_1021.pdf

A Collaborative Mechanism for Crowdsourcing Prediction Problems: Jacob D. Abernethy, Rafael M. Frongillo, Proceedings of NIPS (2011) http://books.nips.cc/papers/files/nips24/NIPS2011_1403.pdf

Amos will present the following paper:

Sparse Bayesian Multi-Task Learning, C. Archambeau, S. Guo, O. Zoeter, NIPS 2011.

Amos will discuss this paper with reference to other Bayesian approaches to sparsity. For example:

Bayesian Inference and Optimal Design in the Sparse Linear Model, M. Seeger, JMLR 2008

CW: Object Detection with Grammar Models - Ross B. Girshick, Pedro Felzenszwalb, David Mcallester

DR: Learning to Learn with Compound HD Models - Ruslan R. Salakhutdinov, Josh Tenenbaum, Antonio Torralba

BU: Selecting Receptive Fields in Deep Networks - Adam Coates, Andrew Y. Ng

PO: Variational Gaussian Process Dynamical Systems - Andreas C. Damianou, Michalis Titsias, Neil D. Lawrence

(mention) Sparse Inverse Covariance Estimation Using Quadratic Approximation - Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep K. Ravikumar

CS: Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent - Benjamin Recht, Christopher Ré, Stephen Wright, Feng Niu. I also liked Learning unbelievable probabilities by Xaq S. Pitkow, Yashar Ahmadian, Ken D. Miller.

KG: Co-Training for Domain Adaptation - Minmin Chen, Kilian Q. Weinberger, John Blitzer

- Modeling Item-Item Similarities for Personalized Recommendations on Yahoo! Front Page: Agarwal et al., Annals of Applied Statistics (2011)
- A Flexible, Scalable and Efficient Algorithmic Framework for Primal Graphical Lasso: Mazumder and Agarwal: preprint (2011)

- Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems: Toni et al., JRSI (2009)
- Bayesian design of synthetic biological systems: Barnes, et al., PNAS (2010)

We will discuss the following papers:

- Lei Li, B. Aditya Prakash: Time Series Clustering: Complex is Simpler! ICML 2011.
- Manuel Gomez Rodriguez, David Balduzzi, Bernhard Schölkopf: Uncovering the Temporal Dynamics of Diffusion Networks. ICML 2011.

We will discuss the following papers:

- K-H Cho, T. Raiko, A. Ilin: Enhanced Gradient and Adaptive Learning Rate for Training Restricted Boltzmann Machines. ICML 2011.
- B. Schwehn: Using the Natural Gradient for training Restricted Boltzmann Machines. M.Sc. thesis.

We will review the proceedings of UAI 2011. If you would like to discuss a paper, please edit this section to include the title together with your initials.

SL: Pitman-Yor Diffusion Trees - Knowles, Ghahramani

AJS: Sum Product Networks - Poon, Domingos

NH: Bregman divergence as general framework to estimate unnormalized statistical models - Gutmann, Hirayama

CKIW: Conditional Restricted Boltzmann Machines for Structured Output Prediction -- Mnih, Larochelle, Hinton

DR: Classification of Sets using Restricted Boltzmann Machines -- Louradour, Larochelle

- Guest lecture by Matthias Bethge.

- C. Pamminger and S. Fruhwirth-Schnatter, “Model-based clustering of categorical time series,” Bayesian Analysis, vol. 5, no. 2, pp. 345–368, 2010.

- A. U. Asuncion, Q. Liu, A.T. Ihler, P. Smyth: Particle Filtered MCMC-MLE with Connections to Contrastive Divergence. ICML 2010

- FA: A. Vattani, D. Chakrabarti, M. Gurevich: Preserving Personalized Pagerank in Subgraphs
- CW: R. Socher, C. Lin, A. Y. Ng, and C. D. Manning: Parsing Natural Scenes and Natural Language with Recursive Neural Networks
- PRO: X. Zhang, D. Dunson, L. Carin: Tree-Structured Infinite Sparse Factor Model
- IM: Max Welling and Yee Whye Teh: Bayesian learning via stochastic gradient Langevin dynamics
- FD: F. Doshi et al.: Infinite Dynamic Bayesian Networks

We will discuss the following paper:

- Andrew Saxe, Pang Wei Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, Andrew Ng: On Random Weights and Unsupervised Feature Learning, ICML, 2011

We will discuss the following papers:

- Andreas Ruttor, Manfred Opper: Efficient statistical inference for stochastic reaction processes, Phys Rev Lett, 2009
- Andreas Ruttor, Manfred Opper: Approximate parameter inference in a stochastic reaction-diffusion model, Aistats, 2010

Title: Sparse Variational Inference for Multi-Task Learning

Please add your paper nominations together with your initials.

- JK: A. Courville, J. Bergstra, and Y. Bengio: A Spike and Slab Restricted Boltzmann Machine
- GS: Chris Bracegirdle, David Barber: Switch-Reset Models : Exact and Approximate Inference
- AS: Frederik Eaton: A conditional game for comparing approximations
- CW: Jaakko Peltonen, Samuel Kaski: Generative Modeling for Maximizing Precision and Recall in Information Visualization
- AE: I'd actually like to do the Larochelle and Murray paper if Iain isn't interested in doing it himself: H. Larochelle, I. Murray: The Neural Autoregressive Distribution Estimator. (That's fine, IM)
- IM: Two papers on matrix factorization: Lakshminarayanan, Bouchard and Archambeau, Robust Bayesian Matrix Factorisation and Balan, Boyles, Welling, Kim and Park Statistical Optimization of Non-Negative Matrix Factorization

We will discuss the following paper:

- Airoldi, Blei, Fienberg and Xing: Mixed Membership Stochastic Blockmodels

We will discuss the following papers:

- Andrieu, Doucet and Tadic: On-Line Parameter Estimation in General State-Space Models (sections I and II)
- Kantas, Doucet, Singh, Maciejowski: An overview of sequential Monte Carlo methods for parameter estimation in general state-space models

We will discuss the following paper:

- Brunel and d'Alché-Buc: Flow-Based Bayesian Estimation of Nonlinear Differential Equations for Modeling Biological Networks

We will discuss the following paper:

- Gianluigi Pillonetto and Francesco Dinuzzo and Giuseppe De Nicolao: Bayesian Online Multitask Learning of Gaussian Processes

We will discuss the following paper:

- Polson, N. G. & Scott, J. G.: Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction

We will discuss the following paper:

- E C Marshall and D J Spiegelhalter: Identifying outliers in Bayesian hierarchical models: a simulation-based approach, Bayesian Analysis 2(2) 409-444 (2007)

We will discuss the following paper:

- Olivier Breuleux, Yoshua Bengio, and Pascal Vincent, Neural Computation (in press): Quickly Generating Representative Samples from an RBM-Derived Process

- Loris Bazzani, Nando de Freitas, Jo-Anne Ting, Deep Learning and Unsupervised Feature Learning NIPS Workshop (2010): Learning attentional mechanisms for simultaneous object tracking and recognition with deep networks

We will discuss the following paper

- Andrew Gelfand, Yutian Chen, Laurens van der Maaten, Max Welling: On Herding and the Perceptron Cycling Theorem

- DR: Le et al.: Tiled convolutional neural networks (*)
- DR: Larochelle & Hinton: Learning to combine foveal glimpses with a third-order Boltzmann machine
- CW: Ackerman et al Towards Property-Based Classification of Clustering Paradigms
- CW: Ranzato et al Generating more realistic images using gated MRF's
- IS: Huang et al Predicting Execution Time of Computer Programs Using Sparse Polynomial Regression
- IS: Kolter et al Energy Disaggregation via Discriminative Sparse Coding
- IM: Bickson and Guestrin Inference with Multivariate Heavy-Tails in Linear Models
- FD: Elidan: Copula Bayesian Networks

(*) DR: Due to my short notice decision to submit something to ICANN, I probably won't have time to look at the paper, so it's up for grabs.

Leave at bottom of list:

CW: There are lots of other papers I like that I hope someone will choose, e.g. Structured Determinantal Point Processes by Alex Kulesza, Ben Taskar; Tree-Structured Stick Breaking for Hierarchical Data by Ryan Adams, Zoubin Ghahramani, Michael Jordan; The Multidimensional Wisdom of Crowds by Peter Welinder, Steve Branson, Serge Belongie, Pietro Perona; Self-Paced Learning for Latent Variable Models by M. Pawan Kumar, Benjamin Packer, Daphne Koller; Learning Convolutional Feature Hierarchies for Visual Recognition by Kavukcuoglu et al.; Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform by Siwei Lyu; Global seismic monitoring as probabilistic inference by Nimar Arora, Stuart Russell, Paul Kidwell, Erik Sudderth (application); Energy Disaggregation via Discriminative Sparse Coding by J. Zico Kolter, Siddharth Batra, Andrew Ng (application).

IM: There are some other papers I could say a sentence or two about: "Tree-Structured Stick Breaking for Hierarchical Data" by Ryan Adams, Zoubin Ghahramani, Michael Jordan; "Global seismic monitoring as probabilistic inference" by Nimar Arora, Stuart Russell, Paul Kidwell, Erik Sudderth; "Label Embedding Trees for Large Multi-Class Tasks" by Samy Bengio, Jason Weston, David Grangier; Comparing "Movement extraction by detecting dynamics switches and repetitions" by Silvia Chiappa, Jan Peters and "Mixture of time-warped trajectory models for movement decoding" by Elaine Corbett, Eric Perreault, Konrad Koerding; "Self-Paced Learning for Latent Variable Models" by M. Pawan Kumar, Benjamin Packer, Daphne Koller.

- Visit PIGS meetings 2010 for a list of meetings held in 2010.

- Visit PIGS meetings 2009 for a list of meetings held in 2009.

- Visit PIGS meetings 2008 for a list of meetings held in 2008.

- Visit PIGS meetings 2007 for a list of meetings held in 2007.

- Visit the PIGSTwoThousandAndSixAndEarlier page for earlier meetings.

| I | Attachment | Action | Size | Date | Who | Comment |
|---|---|---|---|---|---|---|
| zip | icml-2up.zip | manage | 4178.0 K | 15 Jul 2008 - 12:53 | Main.s0565918 | |
| | latent-models-covariance.pdf | manage | 276.1 K | 20 Jul 2007 - 13:38 | Main.s9810791 | Latent models for cross-covariance (PIGS 24th July 2007) |

Topic revision: r361 - 28 Mar 2014 - 11:59:58 - Main.s1058681

Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.

