The AAAI-22 Workshop on Information Theory for Deep Learning (IT4DL)

Workshop Summary

In recent years, Information Theoretic Learning (ITL) has been exploiting the remarkable advantages of information theoretic methods to solve various deep learning problems. Despite the great success of deep neural networks (DNNs) in many artificial intelligence tasks, they suffer from limitations such as poor generalization to out-of-distribution (OOD) data, vulnerability to adversarial examples, and a “black-box” nature that obscures understanding of their inner representations and decision-making processes. Furthermore, DNNs are data-hungry in the supervised setting and remain underdeveloped for learning with limited labels, for instance in semi-supervised, self-supervised, or unsupervised learning.

The practicality of the notion of information and the solid mathematics of information theory have demonstrated great potential for solving various deep learning problems: 1) designing robust, non-metric loss functions (e.g., the Minimum Error Entropy criterion) for network training; 2) using information theoretic frameworks (e.g., the Information Bottleneck) to explain the generalization behavior of DNNs or to improve their adversarial robustness and OOD generalization; and 3) incorporating information theory to learn causal representations, quantify uncertainty, and optimize the value of information in abstract tasks such as the exploration-exploitation dilemma in reinforcement learning.
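To make the first direction concrete, below is a minimal PyTorch-style sketch of the Minimum Error Entropy (MEE) criterion used as a training loss, with the error entropy estimated via a Gaussian Parzen window. The kernel bandwidth `sigma` and the numerical stabilizer are illustrative assumptions, not prescriptions from the workshop.

```python
import torch

def mee_loss(errors, sigma=1.0):
    """Minimum Error Entropy criterion: minimize the quadratic (Renyi) entropy
    of the prediction errors, estimated with a Gaussian Parzen window."""
    e = errors.view(-1, 1)
    pairwise = e - e.t()                               # all pairwise error differences
    # Gaussian kernel with bandwidth sigma * sqrt(2); averaging over all pairs
    # gives the quadratic information potential V(e)
    potential = torch.exp(-pairwise.pow(2) / (4.0 * sigma ** 2)).mean()
    return -torch.log(potential + 1e-12)               # H_2(e) = -log V(e)
```

In typical ITL usage, such a loss replaces the mean-squared-error objective, e.g., `mee_loss(y_pred - y_true)`, to make network training more robust to outliers and non-Gaussian noise.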

With the recent rapid development of advanced techniques at the intersection of information theory and machine learning, such as neural network-based mutual information estimators, deep generative models and causal representation learning, domain adaptation and generalization, and deep reinforcement learning, we believe information theoretic methods can provide new perspectives, theories, and methods for the challenging problems of deep learning, particularly the central issues of generalization, robustness, and explainability.
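As a sketch of one such technique, the following illustrates a MINE-style neural mutual information estimator based on the Donsker-Varadhan lower bound; the statistics network architecture and hidden size are illustrative assumptions, not a specification from any accepted paper.

```python
import math
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Neural estimator of a Donsker-Varadhan lower bound on I(X; Y):
    I(X; Y) >= E_p(x,y)[T(x,y)] - log E_p(x)p(y)[exp(T(x,y))]."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        # T is the "statistics network"; this architecture is an illustrative choice
        self.T = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        n = x.size(0)
        joint = self.T(torch.cat([x, y], dim=1)).mean()
        # Shuffling y breaks the pairing, yielding samples from the product of marginals
        y_perm = y[torch.randperm(n)]
        marginal = torch.logsumexp(self.T(torch.cat([x, y_perm], dim=1)), dim=0) - math.log(n)
        return joint - marginal  # maximizing this over T tightens the lower bound on I(X; Y)
```

Maximizing the returned bound over the parameters of `T` with a standard optimizer yields an estimate of the mutual information; estimators of this kind underpin several of the topics listed below.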

Organizers

  • Jose C. Principe (University of Florida)

  • Robert Jenssen (UiT The Arctic University of Norway)

  • Badong Chen (Xi'an Jiaotong University)

  • Shujian Yu (UiT The Arctic University of Norway, main contact: yusj9011@gmail.com)

Topics of Interest

This workshop aims to bring together both academic researchers and industrial practitioners to share visions on the intersection between information theory and deep learning. Topics of interest include but are not limited to:

  • Estimation of information theoretic quantities from data

  • Information theoretic learning principles and their implementations for the generalization and robustness of deep neural networks

  • Interpretation and explanation of deep neural networks with information theoretic methods

  • Information theoretic methods for domain adaptation, out-of-domain generalization, and related problems (such as robust transfer learning and lifelong learning)

  • Information theoretic methods for learning from limited labelled data, such as few-shot learning, zero-shot learning, self-supervised learning, and unsupervised learning

  • Information theoretic methods in generative models and causal representation learning

  • Information theoretic methods for distributed deep learning

  • Information theoretic methods for (deep) reinforcement learning

  • Information theoretic methods for uncertainty quantification

  • Information theoretic methods for multi-view, multi-task and general AI models

Important Dates

  • Submission deadline: November 12 (will be extended by two weeks).

  • Notification date: December 3.

  • Workshop date: February 28.

Accepted Papers

  • Model2Detector: Widening the Information Bottleneck for Out-of-Distribution Detection using a Handful of Gradient Steps, Sumedh Sontakke, Buvaneswari Ramanan, Laurent Itti, and Thomas Woo.

  • Generative-Contrastive Learning for Self-Supervised Latent Representations of 3D Shapes from Multi-Modal Euclidean Input, Chengzhi Wu, Mingyuan Zhou, Julius Pfrommer, and Jürgen Beyerer.

  • Neural Divergence Estimation Between Sets of Samples, Kira Selby, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, and Pascal Poupart.

  • Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis, Yufeng Shi, Shuhuang Chen, Xinge You, Qinmu Peng, Weihua Ou, and Yue Zhao.

  • Quantization for Distributed Optimization, S Vineeth.

  • Deep Clustering with the Cauchy-Schwarz Divergence, Daniel J. Trosten, Kristoffer Wickstrøm, Shujian Yu, Sigurd Løkse, Robert Jenssen, and Michael Kampffmeyer.

  • Robust and Discriminative Deep Transfer Learning Scheme for EEG-Based Motor Imagery Classification, Xiuyu Huang, Nan Zhou, Badong Chen, and Kup-Sze Choi.

  • Multi-Source Domain Adaptation with von Neumann Conditional Divergence, Ammar Shaker.

  • Self-Supervised Robust Scene Flow Estimation via the Alignment of Probability Density Functions, Pan He, Patrick Emami, Sanjay Ranka, and Anand Rangarajan.