Multi-Modal Recommender Systems: Hands-On Exploration

Tuan, Aghiles, and Hady will deliver a tutorial at the RecSys 2021 conference, taking place in September 2021. The slides and hands-on materials can be found here.

Abstract

Recommender systems typically learn from user-item preference data such as ratings and clicks. This information is sparse in nature: observed user-item preferences often represent less than 5% of possible interactions. One promising direction to alleviate data sparsity is to leverage auxiliary information that may encode additional clues about how users consume items. Examples of such data, referred to as modalities, are social networks, items' descriptive texts, and product images. The objective of this tutorial is to offer a comprehensive review of recent advances in representing, transforming, and incorporating the different modalities into recommendation models. Moreover, through practical hands-on sessions, we conduct cross-model/modality comparisons to investigate the importance of different methods and modalities. The hands-on sessions will be conducted with Cornac, a comparative framework for multimodal recommender systems.
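To give a flavor of the kind of comparative experiment the hands-on sessions revolve around, below is a minimal Cornac sketch that benchmarks two collaborative filtering baselines on MovieLens 100K. The dataset, models, and hyperparameters here are illustrative choices following Cornac's quickstart pattern, not the tutorial's exact materials.

```python
import cornac
from cornac.datasets import movielens
from cornac.eval_methods import RatioSplit
from cornac.metrics import NDCG, Recall
from cornac.models import BPR, MF

# Load MovieLens 100K user-item ratings.
feedback = movielens.load_feedback(variant="100K")

# Hold out 20% of the interactions for testing.
ratio_split = RatioSplit(
    data=feedback, test_size=0.2, rating_threshold=4.0, seed=123
)

# Two baseline models; hyperparameters are placeholders, not tuned values.
models = [
    MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, seed=123),
    BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123),
]

# Run the comparison and print a table of ranking metrics per model.
cornac.Experiment(
    eval_method=ratio_split,
    models=models,
    metrics=[Recall(k=10), NDCG(k=10)],
).run()
```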

Outline

  1. Brief overview of recommender systems (20 minutes)

  2. Introduction to multimodal recommender systems (20 minutes)

  3. Hands-on: Starting with the Cornac framework (10 minutes)

  4. Exploration into each modality (90 minutes):

  • Text modality (see the Cornac sketch after this outline)
  • Image modality
  • Network modality

  5. Cross-modal utilization (30 minutes)

  6. Future directions (10 minutes)
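As a preview of the text-modality session, the sketch below shows how item text is attached to an evaluation method in Cornac so that a text-aware model such as Collaborative Deep Learning (CDL) can consume it. It follows the pattern of Cornac's public CDL example on the CiteULike dataset; the hyperparameters are placeholders, and running CDL requires Cornac's TensorFlow dependency.

```python
import cornac
from cornac.data import Reader, TextModality
from cornac.data.text import BaseTokenizer
from cornac.datasets import citeulike
from cornac.eval_methods import RatioSplit
from cornac.metrics import Recall

# Item abstracts and their ids from the CiteULike dataset.
docs, item_ids = citeulike.load_text()
feedback = citeulike.load_feedback(reader=Reader(item_set=item_ids))

# Wrap the raw text as a modality; Cornac tokenizes the corpus and
# builds the bag-of-words representation internally.
item_text = TextModality(
    corpus=docs,
    ids=item_ids,
    tokenizer=BaseTokenizer(stop_words="english"),
    max_vocab=8000,
    max_doc_freq=0.5,
)

# The modality is passed to the evaluation method alongside the feedback.
ratio_split = RatioSplit(
    data=feedback,
    test_size=0.2,
    exclude_unknowns=True,
    item_text=item_text,
    rating_threshold=0.5,
    seed=123,
)

# CDL jointly learns from ratings and item text; hyperparameters
# here are illustrative placeholders.
cdl = cornac.models.CDL(k=50, max_iter=30, seed=123)

cornac.Experiment(
    eval_method=ratio_split, models=[cdl], metrics=[Recall(k=300)]
).run()
```

The same pattern carries over to the other modalities: an ImageModality or GraphModality is constructed from the auxiliary data and handed to the evaluation method, and modality-aware models pick it up from there.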

Target Audience

Introductory to intermediate. We target both practitioners seeking applicable experience and researchers interested in recent and future research directions in multimodal recommender systems.

Prerequisites

Basic knowledge of Python, machine learning and recommender systems.

Speakers