For many of our daily decisions – which product to purchase, which restaurant to visit, what to eat – we face an increasingly large number of options. One way to simplify and improve the user experience is to offer personalized recommendations, narrowing the options down to those of greatest interest to the user. The challenge lies in learning user preferences from sparse behavioral signals.
As the applications we encounter online grow more interactive, we express our preferences in a variety of modalities. For instance, in addition to numerical ratings that indicate how much we like something, we write textual reviews that expand on the different facets of our experiences. The ubiquity of smartphones (essentially mobile cameras) means that visually inclined users find it convenient to express their preferences through images. Moreover, we are the company we keep: similar preferences tend to be found within social networks. Our thrust is therefore to incorporate and integrate multi-modal preference signals into recommendation.
Because the notion of multi-modality pervades the whole expanse of a recommendation framework, our goal is to build an end-to-end recommendation framework with multiple components. Data Infrastructure & Representation Learning enables systematic and periodic data collection, integration, and feature extraction. Preference Learning Algorithms encapsulate both libraries and pretrained models. The Recommendation Retrieval Engine delivers recommendations in a user-centric and computationally efficient manner. This framework comprises individual projects, each focusing on particular tasks and deliverables, as listed below.
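To make the three-component structure concrete, here is a minimal, self-contained sketch of how such a pipeline could fit together: per-modality feature extraction, weighted fusion of modality embeddings as a stand-in for preference learning, and top-k retrieval by dot-product scoring. All names (`extract_features`, `fuse`, `retrieve_top_k`), the fixed fusion weights, and the toy vectors are illustrative assumptions, not the framework's actual APIs or models.

```python
# Hypothetical sketch of the three components described above.
# All function names, weights, and vectors are illustrative assumptions.

# 1. Data Infrastructure & Representation Learning:
#    map each modality of an item to a fixed-length vector.
#    Here the "extractors" are stubs returning precomputed toy embeddings.
def extract_features(item):
    return {
        "rating": item.get("rating_vec", [0.0, 0.0]),
        "text":   item.get("text_vec",   [0.0, 0.0]),
        "image":  item.get("image_vec",  [0.0, 0.0]),
    }

# 2. Preference Learning: fuse modality vectors with weights
#    (fixed here; learned from behavioral signals in practice).
MODALITY_WEIGHTS = {"rating": 0.5, "text": 0.3, "image": 0.2}

def fuse(features):
    dim = len(next(iter(features.values())))
    fused = [0.0] * dim
    for modality, vec in features.items():
        w = MODALITY_WEIGHTS[modality]
        for i, x in enumerate(vec):
            fused[i] += w * x
    return fused

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# 3. Recommendation Retrieval Engine: score all items against the
#    user vector and return the ids of the k highest-scoring ones.
def retrieve_top_k(user_vec, items, k=2):
    scored = [(dot(user_vec, fuse(extract_features(it))), it["id"])
              for it in items]
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored[:k]]

items = [
    {"id": "a", "rating_vec": [1.0, 0.0], "text_vec": [0.5, 0.5], "image_vec": [0.0, 1.0]},
    {"id": "b", "rating_vec": [0.0, 1.0], "text_vec": [0.5, 0.5], "image_vec": [1.0, 0.0]},
]
print(retrieve_top_k([1.0, 0.2], items, k=1))  # prints ['a']
```

In a real system the fusion weights would be learned per user from ratings, reviews, images, and social links, and the exhaustive scoring loop would be replaced by an approximate nearest-neighbor index for efficiency.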