This book is mainly about the evaluation of adaptive recommender systems. Recommendation is the task, for an intelligent system, of supplying a user of an application with personalized content so as to enhance what is referred to as the "user experience", e.g. recommending a product on a merchant website or an article on a blog. Many applications of interest to us generate huge amounts of data through their millions of online users. Nevertheless, using this data to evaluate a new recommendation technique, in particular one that learns online (an adaptive one), is far from trivial. Some approaches have been proposed, but they were not studied thoroughly, either from a theoretical point of view or from an empirical one. In this work we start by filling gaps in the theoretical analysis. We then comment on the results of an experiment of unprecedented scale in this area: a public challenge we organized. This challenge, along with complementary experiments, revealed an unexpected and tremendous source of bias: time acceleration. The rest of this work tackles this issue. We show that a bootstrap-based approach makes it possible to significantly reduce this bias and, more importantly, to control it.
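To give an intuition for the kind of bootstrap-based evaluation mentioned above, the sketch below resamples a log of recorded interactions with replacement and recomputes an offline metric on each replica, yielding a point estimate together with a percentile interval. This is a minimal, generic illustration of the bootstrap idea, not the actual method of this work; the names `logged_events` and `policy_score` are hypothetical placeholders.

```python
import random

def bootstrap_replay(logged_events, policy_score, n_resamples=200, seed=0):
    """Estimate an offline metric via bootstrap resampling of a log.

    `logged_events` is a list of recorded interactions and `policy_score`
    maps a sample of events to a scalar metric; both names are
    illustrative, not taken from the original work.
    """
    rng = random.Random(seed)
    n = len(logged_events)
    estimates = []
    for _ in range(n_resamples):
        # Resample the log with replacement to build one bootstrap replica.
        sample = [logged_events[rng.randrange(n)] for _ in range(n)]
        estimates.append(policy_score(sample))
    estimates.sort()
    # Point estimate plus a simple 95% percentile interval: the interval
    # width is what lets one quantify, and hence control, the uncertainty.
    mean = sum(estimates) / len(estimates)
    lo = estimates[int(0.025 * len(estimates))]
    hi = estimates[int(0.975 * len(estimates)) - 1]
    return mean, (lo, hi)
```

The spread of the resampled estimates gives a handle on evaluation variability that a single replay of the log cannot provide.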