Tutorial 1 (T1-a): High Dimensional Causation Analysis
Date & Time: Wednesday, 15 Nov 2017 at 08:30 am – 10:00 am
- Zhenjie Zhang, Advanced Digital Sciences Center, Singapore
- Ruichu Cai, Guangdong University of Technology, China
Causation analysis is one of the most fundamental research topics in machine learning. It aims to identify the causal variables linked to the effect variables from a group of samples in a high dimensional space. The result of causation analysis provides key insight into the target problem domain and potentially enables new technologies such as genetic therapy in the genomic domain and predictive maintenance in the IoT domain. Unlike existing regression and classification algorithms in machine learning, e.g., random forest and deep learning, which exploit correlations among variables, causation analysis is supposed to unveil the complete and accurate structure of causal influence between every pair of variables in the domain.
This raises enormous challenges for both mathematical modeling and algorithm design, because of the exponential growth in complexity when extending from correlational dependency to causal dependency. In the last decade, huge efforts have been devoted to a variety of research frontiers in causation analysis, generating interesting and impressive new understandings under completely different assumptions about the underlying causal structure generation process. In this tutorial, we introduce the theoretical discoveries behind new models, review the significance and usefulness of the new approaches, discuss the applicability of new algorithms to real-world applications, and address possible future research directions.
Tutorial 1 (T1-b): Deep learning for Biomedicine
Date & Time: Wednesday, 15 Nov 2017 at 10:00 am – 11:30 am
- Truyen Tran, Deakin University, Australia
The ancient board game of Go, once predicted to remain unsolved for decades, is no longer an AI challenge. Equipped with deep learning, DeepMind's program AlphaGo beat a human champion 4 to 1. Indeed, deep learning has enjoyed many record-breaking successes in vision, speech and NLP, and has helped spark huge interest in AI from both academia and industry. Perhaps the next most important area for deep learning to conquer is biomedicine. With obvious benefits to mankind and a huge industry, deep learning for biomedicine has recently attracted great attention in both industry and academia. While we hold great optimism for its success, biomedicine is new to deep learning, and there are unique challenges yet to be addressed.
This tutorial consists of two parts. Part I briefly covers the main ideas behind state-of-the-art deep learning theory and practice. Part II guides practitioners through designing deep architectures for biomedicine that best address its challenges, some of which are unique to the field.
Tutorial 2 (T2): Statistical Relational Artificial Intelligence
Date & Time: Wednesday, 15 Nov 2017 at 12:30 pm – 03:30 pm
- Kristian Kersting, TU Darmstadt, Germany
An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about the properties of these individuals and the relations among them, as well as cope with uncertainty.
Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This tutorial examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations.
This tutorial will provide a gentle introduction to the foundations of statistical relational artificial intelligence, realized by introducing the foundations of logic, of probability, of learning, and their respective combinations.
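To give a flavor of what a relational probabilistic model looks like, the toy sketch below (the atom names, rules, and weights are invented for illustration, in the spirit of Markov logic) scores each possible world by the weighted ground formulas it satisfies and answers a conditional query by exhaustive enumeration:

```python
import itertools
import math

# Ground atoms of a tiny invented domain: Smokes(anna), Smokes(bob), Cancer(anna).
atoms = ["smokes_anna", "smokes_bob", "cancer_anna"]

# Weighted ground formulas: each pairs a weight with a test on a possible world.
formulas = [
    (1.5, lambda w: (not w["smokes_anna"]) or w["cancer_anna"]),  # Smokes(x) => Cancer(x)
    (1.1, lambda w: (not w["smokes_anna"]) or w["smokes_bob"]),   # friends smoke alike
]

def weight(world):
    """Unnormalized weight of a world: exp of the summed weights of satisfied formulas."""
    return math.exp(sum(wt for wt, f in formulas if f(world)))

# Enumerate all possible worlds and compute P(Cancer(anna) | Smokes(anna)).
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
num = sum(weight(w) for w in worlds if w["smokes_anna"] and w["cancer_anna"])
den = sum(weight(w) for w in worlds if w["smokes_anna"])
p = num / den
print(p)  # sigmoid(1.5) ~ 0.8176
```

Brute-force enumeration is only feasible for a handful of atoms; the lifted inference techniques covered in the tutorial are what make such models usable at scale.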
Tutorial 3 (T3): Machine Learning for Industrial Predictive Analytics
Date & Time: Wednesday, 15 Nov 2017 at 08:30 am – 11:30 am
- Evgeny Burnaev, Skolkovo Institute of Science and Technology, Russia
- Maxim Panov, Skolkovo Institute of Science and Technology, Russia
Approximation problems (also known as regression problems) arise quite often in industrial design, and solutions to such problems are conventionally referred to as surrogate models. The most common application of surrogate modeling in engineering is in connection with engineering optimization for predictive analytics. Indeed, on the one hand, design optimization plays a central role in the industrial design process; on the other hand, a single optimization step typically requires the optimizer to create or refresh a model of the response function whose optimum is sought, in order to come up with a reasonable next design candidate.
The surrogate models used in optimization range from simple local linear regression, employed in basic gradient-based optimization, to complex global models employed in so-called Surrogate-Based Optimization (SBO). Aside from optimization, surrogate modeling is used in dimension reduction, sensitivity analysis, and the visualization of response functions. In this tutorial we highlight the main issues in constructing and applying surrogate models, describe both state-of-the-art techniques and a few novel approximation algorithms, and demonstrate the efficiency of the surrogate modeling methodology on several industrial engineering problems.
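A minimal sketch of this optimization loop (the response function and design points here are a made-up toy, standing in for an expensive simulation): fit a cheap quadratic surrogate to a few samples of the response, then take the surrogate's minimizer as the next design candidate.

```python
import numpy as np

def expensive_response(x):
    """Stand-in for a costly engineering simulation (hypothetical toy function)."""
    return (x - 2.0) ** 2 + 1.0

# Evaluate the response at a few design points
X = np.array([0.0, 1.0, 3.0, 4.0])
y = expensive_response(X)

# Fit a quadratic surrogate y ~ a*x^2 + b*x + c by least squares
A = np.vstack([X**2, X, np.ones_like(X)]).T
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]

# Next design candidate: minimizer of the surrogate (vertex of the parabola)
x_next = -b / (2 * a)
print(x_next)  # close to the true optimum at x = 2
```

In a real SBO loop, the candidate would be evaluated with the true simulator, added to the sample set, and the surrogate refit; richer global models (e.g., Gaussian processes) replace the quadratic when the response is multimodal.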
Tutorial 4 (T4): Convex Optimization for Decentralized Machine Learning
Date & Time: Wednesday, 15 Nov 2017 at 12:30 pm – 03:30 pm
- Sunghee Yun, Amazon, USA
Convex optimization refers to a special class of mathematical optimization problems, as well as to the theory and the specific numerical algorithms employed to solve such problems in practice. Every engineer should be familiar with convex optimization, since it arises naturally in many modern scientific and engineering fields such as computer science, electrical engineering, and many more. In this tutorial, we start from convex optimization, but go beyond it.
We will cover the decentralized convex optimization technique called the alternating direction method of multipliers (ADMM) and its implications for modern decentralized machine learning environments with ever-increasing data sizes, i.e., the big data era. Lastly, four perspectives on deep learning, namely the statistical perspective, the numerical algorithmic perspective, the computer science perspective, and performance acceleration using hardware parallelism, will be explored for a better understanding of contemporary machine learning and artificial intelligence trends.
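As a minimal illustration of the decentralized flavor of ADMM, the consensus sketch below (data values and the penalty parameter are arbitrary toy choices) splits a sum of per-worker quadratic losses across local variables that are driven to agree through a single averaging step, the only communication required:

```python
import numpy as np

# Consensus ADMM sketch: each of N "workers" holds private data a_i and a local
# copy x_i; all copies are driven to agree on the global minimizer of
# sum_i (1/2)(x - a_i)^2, which is simply the mean of the a_i.
a = np.array([1.0, 4.0, 7.0, 12.0])  # hypothetical per-worker data
rho = 1.0                            # ADMM penalty parameter
x = np.zeros_like(a)                 # local primal variables
u = np.zeros_like(a)                 # scaled dual variables
z = 0.0                              # global consensus variable

for _ in range(100):
    # Local updates (each could run on a separate machine, in parallel)
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Global averaging step (the only communication round)
    z = np.mean(x + u)
    # Dual updates, also local to each worker
    u = u + x - z

print(z)  # converges to np.mean(a) = 6.0
```

The same x-update / averaging / dual-update pattern carries over when the local losses are full regression or classification objectives over data shards, which is what makes ADMM attractive for big-data machine learning.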