An Interpretable Model for Collaborative Filtering Using an Extended Latent Dirichlet Allocation Approach

Talk by Dr. Florian Wilhelm at the 35th International FLAIRS Conference

Abstract:

With the growing use of AI and ML-based systems, interpretability is becoming increasingly important for ensuring user trust and safety. This also applies to recommender systems, where methods based on matrix factorization (MF) are among the most popular approaches for collaborative filtering tasks with implicit feedback. Despite their simplicity, the effective, unconstrained MF-based methods yield latent user and item factors that lack interpretability. In this work, we propose an extended Latent Dirichlet Allocation model (LDAext) with interpretable parameters, such as user cohorts of item preferences and the affiliation of a user with different cohorts. We prove a theorem on how to transform the factors of an unconstrained MF model into the parameters of LDAext. Using this theoretical connection, we train an MF model on different real-world data sets, transform the latent factors into the parameters of LDAext, and test their interpretation for plausibility in several experiments. Our experiments confirm the interpretability of the transformed parameters and thus demonstrate the usefulness of our proposed approach.
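The core idea can be sketched conceptually: an unconstrained MF model yields real-valued user and item factor matrices, and the theorem maps them to the probabilistic parameters of LDAext, i.e. user-cohort affiliations and per-cohort item preferences. The following Python sketch illustrates only the shape of that mapping; the softmax-style normalization, variable names, and sizes are assumptions for illustration and not the exact transformation proved in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, n_factors = 100, 500, 8  # hypothetical sizes

# Unconstrained MF factors, e.g. as obtained from training on implicit feedback
user_factors = rng.normal(size=(n_users, n_factors))
item_factors = rng.normal(size=(n_items, n_factors))

def softmax(x, axis):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# User-cohort affiliations: each row is a probability distribution over cohorts
# (illustrative normalization, not the paper's exact transformation)
user_cohort_affiliation = softmax(user_factors, axis=1)    # shape (n_users, n_factors)

# Cohort item preferences: each cohort (factor) is a distribution over items
cohort_item_preferences = softmax(item_factors.T, axis=1)  # shape (n_factors, n_items)

# Sanity check: both parameter sets are valid probability distributions
assert np.allclose(user_cohort_affiliation.sum(axis=1), 1.0)
assert np.allclose(cohort_item_preferences.sum(axis=1), 1.0)

# Inspect the top items of one cohort, as one might do in a plausibility experiment
top_items = np.argsort(cohort_item_preferences[0])[::-1][:10]
print("Top items for cohort 0:", top_items)
```

For the actual transformation and the trained models, see the repository linked below.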

Event: The 35th International FLAIRS Conference

Date of talk: 16 May 2022

About the speaker: Dr. Florian Wilhelm is a mathematician and data scientist driven by the goal of implementing data-driven services for clients and creating real value there. He works as Head of Data Science at inovex GmbH.

Repository: https://github.com/FlorianWilhelm/lda4rec

How can we support you?

Florian Wilhelm

Head of Data Science, contact person for Data Management & Analytics