BEARS: Towards an evaluation framework for bandit-based interactive recommender systems
Barraza-Urbina, Andrea; Koutrika, Georgia; d'Aquin, Mathieu; Hayes, Conor
Publication Date
2018-10-06
Type
Conference Paper
Citation
Barraza-Urbina, Andrea, Koutrika, Georgia, d'Aquin, Mathieu, & Hayes, Conor (2018). BEARS: Towards an evaluation framework for bandit-based interactive recommender systems. Paper presented at REVEAL'18, Vancouver, Canada, 06-07 October. DOI: 10.13025/x72s-8r20
Abstract
Recommender Systems (RS) deployed in fast-paced, dynamic scenarios must quickly learn to adapt in response to user evaluative feedback. In these settings, the RS faces an online learning problem where each decision must balance two competing goals: gathering new information about users and optimally serving users according to the knowledge already acquired. Related work commonly addresses this exploration-exploitation trade-off by proposing bandit-based RS. However, evaluating bandit-based RS in an offline interactive environment remains an open challenge. This paper presents BEARS, an evaluation framework that allows users to easily test bandit-based RS solutions. BEARS aims to support reproducible offline evaluations by providing simple building blocks for constructing experiments on a shared platform. Moreover, BEARS can be used to share benchmark problem settings (Environments) and reusable implementations of baseline solution approaches (RS Agents).
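As a rough illustration of the Agent/Environment split the abstract describes, the sketch below pairs a minimal epsilon-greedy bandit agent with a simulated environment that returns Bernoulli click feedback. This is not BEARS' actual API; every name here (EpsilonGreedyAgent, SimulatedEnvironment, choose, update, feedback) is a hypothetical stand-in for illustration only.

    import random

    class EpsilonGreedyAgent:
        """Minimal epsilon-greedy bandit: explores a random item with
        probability epsilon, otherwise exploits the best-known item.
        (Hypothetical stand-in for a BEARS-style RS Agent.)"""

        def __init__(self, n_items, epsilon=0.1):
            self.epsilon = epsilon
            self.counts = [0] * n_items    # plays per item
            self.values = [0.0] * n_items  # running mean reward per item

        def choose(self):
            if random.random() < self.epsilon:
                return random.randrange(len(self.values))  # explore
            return max(range(len(self.values)),
                       key=self.values.__getitem__)        # exploit

        def update(self, item, reward):
            self.counts[item] += 1
            n = self.counts[item]
            # incremental mean keeps memory constant per item
            self.values[item] += (reward - self.values[item]) / n

    class SimulatedEnvironment:
        """Hypothetical stand-in for a BEARS-style Environment: each
        item has a fixed click probability; feedback is 0/1 reward."""

        def __init__(self, click_probs):
            self.click_probs = click_probs

        def feedback(self, item):
            return 1.0 if random.random() < self.click_probs[item] else 0.0

    # Offline interaction loop: the Agent recommends an item, the
    # Environment responds with simulated user feedback.
    env = SimulatedEnvironment([0.05, 0.10, 0.30])
    agent = EpsilonGreedyAgent(n_items=3, epsilon=0.1)
    total = 0.0
    for _ in range(10_000):
        item = agent.choose()
        reward = env.feedback(item)
        agent.update(item, reward)
        total += reward
    print(f"average reward: {total / 10_000:.3f}")

Separating the Agent (solution approach) from the Environment (problem setting) is what makes experiments shareable: the same simulated Environment can serve as a benchmark for many bandit Agents, and a baseline Agent can be reused across Environments.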
Publisher
NUI Galway
Rights
Attribution-NonCommercial-NoDerivs 3.0 Ireland