Optimal Transport


Optimal transport (OT) has a long history in mathematics, dating back to the problem posed by Gaspard Monge in the eighteenth century [Old/New book]. The theory was later investigated by Koopmans and Kantorovich, who were jointly awarded the Nobel Memorial Prize in Economic Sciences, as well as by the Fields medalists Villani (2010) and Figalli (2018). Recently, advances in optimal transport theory have paved the way for its use in the ML/AI community, particularly for formulating models and learning with high-dimensional data. This tutorial aims to introduce pivotal computational and practical aspects of OT, as well as applications of OT to unsupervised learning problems. The tutorial consists of three main parts. In the first part, we present the theoretical and computational background of optimal transport theory. In the second part, we summarize applications of OT to estimating deep generative models. Clustering and topic modelling methods based on OT are covered in the last part of the tutorial. Implementations of the algorithms and illustrative examples will also be presented.
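To give a flavour of the computational side covered in the first part, below is a minimal sketch of entropy-regularised OT via Sinkhorn iterations (the workhorse algorithm for OT in ML). This is an illustrative toy implementation, not the tutorial's official code; the function name, the grid, and the parameter values (`eps`, `n_iters`) are all assumptions chosen for readability.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=2000):
    """Entropy-regularised OT between histograms a and b with cost matrix C.

    Returns the transport plan P and the (regularised) transport cost <P, C>.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel of the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns toward marginal b
        u = a / (K @ v)                  # scale rows toward marginal a
    P = u[:, None] * K * v[None, :]      # transport plan with the two marginals
    return P, float(np.sum(P * C))

# Toy example: two histograms on a 1-D grid with squared-distance cost.
x = np.linspace(0.0, 1.0, 5)
a = np.full(5, 0.2)                              # uniform source histogram
b = np.array([0.1, 0.1, 0.2, 0.3, 0.3])          # skewed target histogram
C = (x[:, None] - x[None, :]) ** 2               # cost of moving mass i -> j
P, cost = sinkhorn(a, b, C)
```

A smaller `eps` yields a plan closer to the unregularised OT solution but needs more iterations (and, in practice, log-domain stabilisation); in applications one would typically use a dedicated library such as POT rather than hand-rolled loops.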


Viet Huynh
Viet Huynh is currently a postdoctoral researcher in the Machine Learning team at Monash University. Before moving to Monash, he was a postdoctoral researcher in the PRaDA (Pattern Recognition and Data Analytics) centre at Deakin University, where he was also a PhD student from 2013 to early 2017 under the supervision of Professor Dinh Phung and Professor Svetha Venkatesh. His PhD work focused on turning big data into actionable information, which involves dealing with the four dimensions of challenges in big data (the four Vs): volume, variety, velocity, and veracity. He received his B.Eng. and M.Eng. degrees in computer science in 2005 and 2009 respectively, both completed at the University of Technology, Vietnam. He is interested in developing large-scale learning algorithms for probabilistic graphical models with complex and large-scale data, applying optimal transport theory to understand challenging problems in machine learning and deep learning, and applying deep generative models for learning with probabilistic graphical models. His research has been published in prestigious machine learning venues such as the Journal of Machine Learning Research (JMLR), NeurIPS, ICML, ICLR, AISTATS, etc.

He Zhao
He Zhao obtained his PhD degree in machine learning at the Department of Data Science and AI (DSAI) of Monash University in 2019 and has been working as a research fellow in the department since then, under the supervision of Prof Dinh Phung. Before coming to Monash, he received his bachelor's and master's degrees from Nankai University and Nanjing University, respectively. He is interested in statistical machine learning and deep learning, including Bayesian statistics, optimal transport, robust machine learning, as well as their applications in computer vision, natural language processing, data mining, etc. His research has been published in prestigious machine learning venues such as NeurIPS, ICML, and ICLR. He is a senior program committee (PC) member of IJCAI 2021 and regularly serves as a reviewer or PC member for leading computer science conferences and journals including NeurIPS, ICML, ICLR, TPAMI, JMLR, etc.

Nhat Ho
Nhat Ho is currently an Assistant Professor of Statistics and Data Sciences at the University of Texas at Austin, where he is also a core member of the Machine Learning Laboratory. Before moving to Austin, he was a postdoctoral fellow in the Electrical Engineering and Computer Science (EECS) Department at the University of California, Berkeley, under the mentorship of Professor Michael I. Jordan and Professor Martin J. Wainwright. He completed his PhD in 2017 at the Department of Statistics, University of Michigan, Ann Arbor, where his advisors were Professor Long Nguyen and Professor Ya'acov Ritov. A central theme of his research is four principles of machine learning, statistics, and data science: heterogeneity of data, interpretability of models, and stability and scalability of optimization and sampling algorithms. He has published and submitted over 50 papers in top conferences and journals of machine learning, statistics, and data science, such as ICML, NeurIPS, ICLR, AISTATS, ICCV, the Journal of Machine Learning Research (JMLR), the Annals of Statistics, and the SIAM Journal on Mathematics of Data Science.

Dinh Phung
Dinh Phung is currently a Professor in the Faculty of Information Technology, Monash University, Australia. He obtained a PhD from Curtin University in 2005 in the areas of machine learning and multimedia computing. His primary interests lie in theoretical and applied machine learning, with a current focus on deep learning, robust and adversarial ML, optimal transport and point process theory for ML, generative AI, Bayesian nonparametrics, and graphical models. He publishes regularly, with over 250 publications in machine learning, AI, data science, and application domains such as natural language processing (NLP), computer vision, digital health, cybersecurity, and autism. He has delivered several invited and keynote talks and served on more than 40 organizing and technical program committees for top machine learning and data analytics conferences. He is currently the lead Editor-in-Chief for the forthcoming 3rd edition of the Encyclopedia of Machine Learning and Data Science, and was a finalist for the Australian Museum Eureka Prize for Excellence in Data Science in 2020.