This text explores a diverse set of data science topics through a mathematical lens, helping mathematicians become acquainted with data science in general, and machine learning, optimal recovery, compressive sensing, optimization, and neural networks in particular. It will also be valuable to data scientists seeking mathematical sophistication.
Simon Foucart is Professor of Mathematics at Texas A&M University, where he was named Presidential Impact Fellow in 2019. He has previously written, together with Holger Rauhut, the influential book A Mathematical Introduction to Compressive Sensing (2013).
Table of Contents
Part I. Machine Learning:
1. Rudiments of Statistical Learning
2. Vapnik-Chervonenkis Dimension
3. Learnability for Binary Classification
4. Support Vector Machines
5. Reproducing Kernel Hilbert Spaces
6. Regression and Regularization
7. Clustering
8. Dimension Reduction
Part II. Optimal Recovery:
9. Foundational Results of Optimal Recovery
10. Approximability Models
11. Ideal Selection of Observation Schemes
12. Curse of Dimensionality
13. Quasi-Monte Carlo Integration
Part III. Compressive Sensing:
14. Sparse Recovery from Linear Observations
15. The Complexity of Sparse Recovery
16. Low-Rank Recovery from Linear Observations
17. Sparse Recovery from One-Bit Observations
18. Group Testing
Part IV. Optimization:
19. Basic Convex Optimization
20. Snippets of Linear Programming
21. Duality Theory and Practice
22. Semidefinite Programming in Action
23. Instances of Nonconvex Optimization
Part V. Neural Networks:
24. First Encounter with ReLU Networks
25. Expressiveness of Shallow Networks
26. Various Advantages of Depth
27. Tidbits on Neural Network Training
Appendix A. High-Dimensional Geometry
Appendix B. Probability Theory
Appendix C. Functional Analysis
Appendix D. Matrix Analysis
Appendix E. Approximation Theory