Schedule

The following schedule is based on Pacific Standard Time (PST).

  • 09:30 AM - 10:15 AM: Invited Talk: Prof. Gitta Kutyniok, Ludwig Maximilian University of Munich, CartoonX: Using information theory to reveal the reason for (wrong) decisions by DNNs

  • 10:15 AM - 10:30 AM: Oral: Quantization for Distributed Optimization

  • 10:30 AM - 11:15 AM: Invited Talk: Prof. Jose Dolz, ETS Montreal, The role of the Shannon entropy as a regularizer of deep neural networks

  • 11:15 AM - 11:30 AM: Oral: Self-Supervised Robust Scene Flow Estimation via the Alignment of Probability Density Functions

  • 11:30 AM - 12:15 PM: Invited Talk: Prof. Abdellatif Zaidi, Université Paris-Est Marne la Vallée, Learning and Inference over Networks: Information-Theoretic Approaches, Architectures and Algorithms

  • 12:15 PM - 12:30 PM: Oral: Multi-Source Domain Adaptation with von Neumann Conditional Divergence

  • 12:30 PM - 01:00 PM: Poster Session

          • Model2Detector: Widening the Information Bottleneck for Out-of-Distribution Detection using a Handful of Gradient Steps

          • Generative-Contrastive Learning for Self-Supervised Latent Representations of 3D Shapes from Multi-Modal Euclidean Input

          • Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis

          • Robust and Discriminative Deep Transfer Learning Scheme for EEG-Based Motor Imagery Classification

  • 01:00 PM - 01:45 PM: Invited Talk: Prof. Alireza Makhzani, Vector Institute for Artificial Intelligence; University of Toronto, Improving Mutual Information Estimation with Annealed and Energy-Based Bounds

  • 01:45 PM - 02:00 PM: Oral: Neural Divergence Estimation Between Sets of Samples

  • 02:00 PM - 02:45 PM: Invited Talk: Prof. Jose C. Principe, University of Florida, Review of Measures and Estimators of Statistical Dependence

  • 02:45 PM - 03:15 PM: Oral: Information Theoretic Structured Generative Modeling

  • 03:15 PM - 03:30 PM: Oral: Deep Clustering with the Cauchy-Schwarz Divergence

Accepted papers
  • Model2Detector: Widening the Information Bottleneck for Out-of-Distribution Detection using a Handful of Gradient Steps, Sumedh Sontakke, Buvaneswari Ramanan, Laurent Itti, and Thomas Woo.

  • Generative-Contrastive Learning for Self-Supervised Latent Representations of 3D Shapes from Multi-Modal Euclidean Input, Chengzhi Wu, Mingyuan Zhou, Julius Pfrommer, and Jürgen Beyerer.

  • Neural Divergence Estimation Between Sets of Samples, Kira Selby, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, and Pascal Poupart.

  • Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis, Yufeng Shi, Shuhuang Chen, Xinge You, Qinmu Peng, Weihua Ou, and Yue Zhao.

  • Quantization for Distributed Optimization, S Vineeth.

  • Deep Clustering with the Cauchy-Schwarz Divergence, Daniel J. Trosten, Kristoffer Wickstrøm, Shujian Yu, Sigurd Løkse, Robert Jenssen, and Michael Kampffmeyer.

  • Robust and Discriminative Deep Transfer Learning Scheme for EEG-Based Motor Imagery Classification, Xiuyu Huang, Nan Zhou, Badong Chen, and Kup-Sze Choi.

  • Multi-Source Domain Adaptation with von Neumann Conditional Divergence, Ammar Shaker.

  • Self-Supervised Robust Scene Flow Estimation via the Alignment of Probability Density Functions, Pan He, Patrick Emami, Sanjay Ranka, and Anand Rangarajan.