41,95 €
incl. VAT
Immediately available via download
- Format: PDF
A comprehensive and rigorous introduction for graduate students and researchers, with applications in sequential decision-making problems.
- Devices: PC
- with copy protection
- Size: 6.61 MB
For legal reasons, this download can only be delivered with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
Product details
- Publisher: Cambridge University Press
- Publication date: 16 July 2020
- Language: English
- ISBN-13: 9781108687492
- Item no.: 70910799
Tor Lattimore is a research scientist at DeepMind. His research is focused on decision making in the face of uncertainty, including bandit algorithms and reinforcement learning. Before joining DeepMind he was an assistant professor at Indiana University and a postdoctoral fellow at the University of Alberta.
1. Introduction
2. Foundations of probability
3. Stochastic processes and Markov chains
4. Finite-armed stochastic bandits
5. Concentration of measure
6. The explore-then-commit algorithm
7. The upper confidence bound algorithm
8. The upper confidence bound algorithm: asymptotic optimality
9. The upper confidence bound algorithm: minimax optimality
10. The upper confidence bound algorithm: Bernoulli noise
11. The Exp3 algorithm
12. The Exp3-IX algorithm
13. Lower bounds: basic ideas
14. Foundations of information theory
15. Minimax lower bounds
16. Asymptotic and instance dependent lower bounds
17. High probability lower bounds
18. Contextual bandits
19. Stochastic linear bandits
20. Confidence bounds for least squares estimators
21. Optimal design for least squares estimators
22. Stochastic linear bandits with finitely many arms
23. Stochastic linear bandits with sparsity
24. Minimax lower bounds for stochastic linear bandits
25. Asymptotic lower bounds for stochastic linear bandits
26. Foundations of convex analysis
27. Exp3 for adversarial linear bandits
28. Follow the regularized leader and mirror descent
29. The relation between adversarial and stochastic linear bandits
30. Combinatorial bandits
31. Non-stationary bandits
32. Ranking
33. Pure exploration
34. Foundations of Bayesian learning
35. Bayesian bandits
36. Thompson sampling
37. Partial monitoring
38. Markov decision processes.
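The upper confidence bound algorithm treated in chapters 7-10 can be sketched in a few lines. The following is an illustrative UCB1-style implementation, not code from the book; the names `pull`, `ucb1`, and the Bernoulli arm probabilities `probs` are hypothetical choices for this example.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Run a UCB1-style policy for `horizon` rounds.

    `pull(i)` returns a reward in [0, 1] for arm i.
    Returns the number of times each arm was played.
    """
    counts = [0] * n_arms      # plays per arm
    means = [0.0] * n_arms     # empirical mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1        # play each arm once to initialise
        else:
            # pick the arm with the largest upper confidence bound:
            # empirical mean plus an exploration bonus that shrinks
            # as the arm is played more often
            arm = max(
                range(n_arms),
                key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean

    return counts

random.seed(0)
probs = [0.3, 0.5, 0.8]  # three Bernoulli arms; arm 2 is best
counts = ucb1(lambda i: 1.0 if random.random() < probs[i] else 0.0, 3, 5000)
```

Over 5000 rounds the policy concentrates its plays on the arm with the highest mean reward while still occasionally sampling the others, which is the explore/exploit trade-off the book's regret analysis makes precise.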