## Avrim Blum
## Online Learning, Regret Minimization, and Game Theory [slides] [video]

Avrim Blum is Professor in the Department of Computer Science at Carnegie Mellon University. The first part of his tutorial will discuss adaptive algorithms for making decisions in uncertain environments (e.g., what route should I take to work if I have to decide before I know what traffic will be like today?) and connections to central concepts in game theory (e.g., what can we say about how traffic will behave overall if everyone adapts their behavior in this way?). He will discuss the notions of external and internal regret, algorithms for the "combining expert advice" and "sleeping experts" problems, algorithms for implicitly specified problems, and connections to the game-theoretic notions of Nash and correlated equilibria. The second part of his tutorial will cover recent work on learning with similarity functions that are not necessarily legal kernels. The high-level question here is: if you have a measure of similarity between data points, how closely related does it have to be to your classification problem in order to be useful for learning?
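As a taste of the regret-minimization material, here is a minimal sketch (illustrative only, not taken from the tutorial) of the multiplicative-weights "Hedge" update for combining expert advice: each expert's weight decays exponentially with its accumulated loss, so the combined forecaster's regret against the best expert grows only sublinearly in the number of rounds.

```python
import math

def hedge(loss_matrix, eta=0.5):
    """Multiplicative-weights ("Hedge") update for combining expert advice.

    loss_matrix[t][i] is the loss of expert i at round t (assumed in [0, 1]).
    Returns the final normalized weight placed on each expert.
    """
    n = len(loss_matrix[0])
    weights = [1.0] * n
    for losses in loss_matrix:
        # Each expert's weight shrinks exponentially in its loss this round.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    return [w / total for w in weights]

# A consistently good expert quickly dominates the weight distribution.
final = hedge([[0.0, 1.0]] * 10)
```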

## Wray Buntine |
## Latent Variable Models for Document Analysis [slides] [video]

Wray Buntine is a researcher in the Statistical Machine Learning group at NICTA's Canberra Laboratory. He will consider various problems in document analysis (named entity recognition, natural language parsing, information retrieval) and look at various probabilistic graphical models and algorithms for addressing these problems. This will not be an extensive coverage of information extraction or natural language processing, but rather a look at some of the theory, methods and practice of particular cases, including the use of software environments.

## Tiberio Caetano |
## Inference in Graphical Models [slides] [video]

Tiberio Caetano is a senior researcher at NICTA's Canberra Laboratory, where he is a member of the Statistical Machine Learning research program. His short course will cover the basics of inference in graphical models. It will start by explaining the theory of probabilistic graphical models, including the concepts of conditional independence and factorisation and how they arise in both Markov random fields and Bayesian networks. He will then present the fundamental methods for performing exact probabilistic inference in such models, including algorithms such as variable elimination, belief propagation and junction trees. He will also briefly discuss some of the current methods for performing approximate inference when exact inference is not feasible. Finally, he will illustrate a range of real problems whose solutions can be formulated as inference in graphical models.
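To give a flavour of exact inference, the following sketch (illustrative, with made-up variable names) runs variable elimination on the smallest interesting case, a chain A → B → C: instead of enumerating all joint assignments, it sums out A first and then B.

```python
def marginal_chain(p_a, p_b_given_a, p_c_given_b):
    """Variable elimination on the chain A -> B -> C: compute P(C).

    p_a[a] = P(A=a); p_b_given_a[a][b] = P(B=b | A=a);
    p_c_given_b[b][c] = P(C=c | B=b).
    """
    # Eliminate A: tau_b[b] = sum_a P(a) P(b|a)
    tau_b = [sum(p_a[a] * p_b_given_a[a][b] for a in range(len(p_a)))
             for b in range(len(p_b_given_a[0]))]
    # Eliminate B: P(c) = sum_b tau_b[b] P(c|b)
    return [sum(tau_b[b] * p_c_given_b[b][c] for b in range(len(tau_b)))
            for c in range(len(p_c_given_b[0]))]

# With deterministic copies (B = A, C = B), P(C) equals the prior P(A).
copy = [[1.0, 0.0], [0.0, 1.0]]
p_c = marginal_chain([0.3, 0.7], copy, copy)
```

The same sum-out step, applied to one node at a time in a good order, is exactly what variable elimination does on larger graphs.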

## Nando de Freitas |
## Monte Carlo Simulation for Statistical Inference, Model Selection and Decision Making [slides] [video]

Nando de Freitas is Associate Professor in the Department of Computer Science at the University of British Columbia. The first part of his course will consist of two presentations. In the first, he will introduce the fundamentals of Monte Carlo simulation for statistical inference, with emphasis on algorithms such as importance sampling, particle filtering and smoothing for dynamic models, Markov chain Monte Carlo (MCMC), Gibbs and Metropolis-Hastings sampling, blocking and mixtures of MCMC kernels, Monte Carlo EM, sequential Monte Carlo for static models, auxiliary variable methods (Swendsen-Wang, hybrid Monte Carlo and slice sampling), and adaptive MCMC. The algorithms will be illustrated with several examples: image tracking, robotics, image annotation, probabilistic graphical models, and music analysis. The second presentation will target model selection and decision-making problems. He will describe the reversible-jump MCMC algorithm and illustrate its application to simple mixture models and nonlinear regression with an unknown number of basis functions. He will show how to apply this algorithm to general Markov decision processes (MDPs). The course will also cover other Monte Carlo simulation methods for partially observed Markov decision processes (POMDPs) using policy gradients, common random number generation, and active exploration with Gaussian processes. An outline of some applications of these methods to robotics and the design of computer game architectures will be given. The presentation will end with the problem of Monte Carlo simulation for Bayesian nonlinear experimental design, with applications to financial modeling, robot exploration, drug treatments, dynamic sensor networks, optimal measurement and active vision.
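Of the algorithms listed, Metropolis-Hastings is the simplest to write down. A minimal random-walk sketch (illustrative parameters, not course material): propose a Gaussian step and accept it with probability proportional to the ratio of target densities, so the chain's stationary distribution is the target even when its normalizing constant is unknown.

```python
import math
import random

def metropolis_hastings(log_target, n_samples, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis-Hastings for a 1-D unnormalized density.

    log_target(x) is the log of the target density up to a constant.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)  # on rejection, the current state is repeated
    return samples

# Sampling a standard normal: the empirical moments approach 0 and 1.
draws = metropolis_hastings(lambda x: -0.5 * x * x, 20000, seed=1)
```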

## Marcus Hutter |
## Introduction and Foundations of Machine Learning [slides1] [video1] [slides2] [video2]

Marcus Hutter is Associate Professor in the RSISE at the Australian National University in Canberra and a NICTA adjunct. The first part of his tutorial provides a brief overview of the fundamental methods and applications of statistical machine learning, which the other speakers will detail or build upon. Statistical machine learning is concerned with the development of algorithms and techniques that learn from observed data by constructing stochastic models that can be used for making predictions and decisions. Topics covered include Bayesian inference and maximum likelihood modeling; regression, classification, density estimation, clustering, and principal component analysis; parametric, semi-parametric, and non-parametric models; basis functions, neural networks, kernel methods, and graphical models; deterministic and stochastic optimization; and overfitting, regularization, and validation. Machine learning is usually taught as a collection of methods that can solve a collection of problems (see above). The second part of the tutorial takes a step back and asks about the foundations of machine learning, in particular the (philosophical) problem of inductive inference, (Bayesian) statistics, and artificial intelligence. It concentrates on principled, unified, and exact methods.
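The contrast between maximum likelihood and Bayesian inference can be shown on the simplest possible model, a coin with unknown bias (a standard Beta-Bernoulli example, not drawn from the tutorial): with a Beta(a, b) prior, the posterior after the data is again a Beta distribution, and its mean smooths the maximum-likelihood estimate toward the prior.

```python
def coin_inference(heads, tails, a=1.0, b=1.0):
    """Maximum likelihood vs Bayesian posterior mean for a coin's bias.

    With a Beta(a, b) prior, the posterior over the bias theta is
    Beta(a + heads, b + tails); its mean is the smoothed estimate.
    """
    ml_estimate = heads / (heads + tails)
    posterior_mean = (a + heads) / (a + b + heads + tails)
    return ml_estimate, posterior_mean

# 7 heads, 3 tails: ML says 0.7, the uniform-prior posterior mean is 8/12.
ml, bayes = coin_inference(7, 3)
```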

## Rao Kotagiri |
## Contrast Data Mining: Methods and Applications [slides] [video]

Ramamohanarao (Rao) Kotagiri is Professor in the Department of Computer Science and Software Engineering at the University of Melbourne. The ability to distinguish, differentiate and contrast between different datasets is a key objective in data mining. Such an ability can assist domain experts in understanding their data and can help in building classification models. His presentation will introduce the principal techniques for contrasting different types of data, covering the main dataset varieties such as relational, sequence, and graph forms of data, clusters, as well as data cubes. It will also focus on some important real-world application areas that illustrate how mining contrasts is advantageous.

## Simon Lucey |
## Learning in Computer Vision [slides] [video]

Dr. Simon Lucey is a Systems Scientist in the Robotics Institute at Carnegie Mellon University. His tutorial will cover some of the core fundamentals of computer vision and demonstrate how they can be interpreted in terms of machine learning fundamentals. Unbeknownst to most researchers in the field of machine learning, the fundamentals of object registration and tracking, such as optical flow, interest descriptors (e.g., SIFT), segmentation and correlation filters, are inherently related to the learning topics of regression, regularization, graphical models, and generative and discriminative models. As a result, many aspects of vision can be interpreted as applied forms of learning. Building on these fundamentals, the tutorial will also explore advanced topics in object registration and tracking, such as non-rigid object alignment/tracking and non-rigid structure from motion, and how the application of machine learning is continuing to improve these technologies.

## Alex Smola |
## Kernel Methods and Support Vector Machines [slides] [video]

Alex Smola is leader of NICTA's Statistical Machine Learning Program in Canberra and Professor in the Computer Sciences Laboratory at the Australian National University. His tutorial will introduce the main ideas of statistical learning theory, support vector machines, and kernel feature spaces. This includes a derivation of the support vector optimization problem for classification and regression, the ν-trick, various kernels, and an overview of applications of kernel methods.
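The key idea behind kernel feature spaces can be illustrated without the full SVM optimization problem. Below is a sketch of the kernel perceptron, a simpler cousin of the SVM (illustrative only; the tutorial derives the SVM itself): it learns dual coefficients and only ever touches the data through kernel evaluations, so an RBF kernel lets a linear algorithm separate data that is not linearly separable, such as XOR.

```python
import math

def kernel_perceptron(xs, ys, kernel, epochs=10):
    """Kernel perceptron: learn dual coefficients alpha per training point,
    never mapping points into the feature space explicitly."""
    n = len(xs)
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            # Decision value is a kernel expansion over the training set.
            f = sum(alpha[j] * ys[j] * kernel(xs[j], xs[i]) for j in range(n))
            if ys[i] * f <= 0:      # mistake: strengthen this point's coefficient
                alpha[i] += 1.0
    return alpha

def kp_predict(alpha, xs, ys, kernel, x):
    score = sum(a * y * kernel(xj, x) for a, y, xj in zip(alpha, ys, xs))
    return 1 if score > 0 else -1

# XOR with a Gaussian (RBF) kernel: separable in feature space.
rbf = lambda u, v: math.exp(-5.0 * sum((a - b) ** 2 for a, b in zip(u, v)))
xs = [(0, 0), (1, 1), (0, 1), (1, 0)]
ys = [1, 1, -1, -1]
alpha = kernel_perceptron(xs, ys, rbf)
```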

## Csaba Szepesvari |
## Introduction to Reinforcement Learning [slides] [video]

Csaba Szepesvari is Associate Professor in the Department of Computing Science at the University of Alberta. His tutorial will introduce reinforcement learning, that is, learning what actions to take, and when to take them, so as to optimize long-term performance. This may involve sacrificing immediate reward to obtain greater reward in the long term, or simply to obtain more information about the environment. The first part of the tutorial will cover the basics: Markov decision processes, dynamic programming, temporal-difference learning, Monte Carlo methods, eligibility traces, and the role of function approximation. The second part will cover some recent developments, namely policy gradient methods and second-order methods such as LSPI and the modified Bellman residual minimization algorithm.
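Temporal-difference learning, one of the basics listed above, fits in a few lines. A tabular TD(0) sketch (illustrative data layout, not from the tutorial): each observed transition nudges the value of a state toward the bootstrapped target, immediate reward plus the discounted value of the next state.

```python
def td0(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0) value estimation.

    Each episode is a list of (state, reward, next_state) transitions,
    with next_state=None marking the terminal step.
    """
    V = {}
    for episode in episodes:
        for s, r, s_next in episode:
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            # Move V(s) a step toward the bootstrapped target r + gamma*V(s').
            V[s] = V.get(s, 0.0) + alpha * (r + gamma * v_next - V.get(s, 0.0))
    return V

# Repeatedly seeing "A" yield reward 1 and terminate drives V("A") toward 1.
V = td0([[("A", 1.0, None)]] * 100)
```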

## Vishy Vishwanathan |
## Machine Learning Laboratory [slides] [video]

Dr. Vishwanathan is a senior researcher at NICTA's Canberra Laboratory, where he is a member of the Statistical Machine Learning research program. The first laboratory session, on March 7, will feature hands-on experiments with Elefant (http://elefant.developer.nicta.com.au), mainly concentrating on installing, using, and developing machine learning algorithms within the Elefant framework. As part of this tutorial we will walk through an example of implementing a simple stochastic gradient descent algorithm. The second session, on March 14, will feature hands-on experiments with BMRM (Bundle Methods for Regularized Risk Minimization) (http://users.rsise.anu.edu.au/~chteo/BMRM.html). The emphasis here will be on developing various loss function modules which can then be plugged into the BMRM solver. CDs and USB sticks containing installation instructions for Elefant and BMRM will be handed out during the session. Preferably bring your own laptop, although some spare PCs may be made available. Students will also have a chance to interact with the leading developers of Elefant and BMRM.
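For orientation before the lab, here is a standalone sketch of the kind of stochastic gradient descent routine to be implemented (plain Python, not the Elefant API; function and parameter names are illustrative): it minimizes an L2-regularized squared loss for a 1-D linear model, one randomly ordered example at a time.

```python
import random

def sgd_linear_regression(data, lam=0.01, eta=0.05, epochs=200, seed=0):
    """Stochastic gradient descent on L2-regularized squared loss
    for the 1-D linear model y ~ w*x + b."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)           # visit examples in random order
        for x, y in data:
            err = w * x + b - y
            w -= eta * (err * x + lam * w)  # gradient of loss + regularizer
            b -= eta * err
    return w, b

# Data generated by y = 2x + 1; SGD recovers roughly w=2, b=1
# (slightly shrunk toward zero by the regularizer).
w, b = sgd_linear_regression([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (-1.0, -1.0)])
```

In the regularized risk minimization view used by BMRM, the squared-error term above is one loss module among many; swapping in, say, the hinge loss changes only the `err`-dependent part of the gradient.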