Simon Foucart
Mathematical Pictures at a Data Science Exhibition
- Hardcover
A diverse selection of data science topics explored through a mathematical lens.
Other customers were also interested in
- Leslie Munday, Analysis Through Pictures, 43,99 €
- Jean Bernard Lasserre, The Christoffel-Darboux Kernel for Data Analysis, 65,99 €
- Symmetrical and Asymmetrical Distributions in Statistics and Data Science, 106,99 €
- Tamal K. Dey, Curve and Surface Reconstruction, 110,99 €
- Oded Goldreich, P, NP, and NP-Completeness, 126,99 €
- Jean-Daniel Boissonnat, Geometric and Topological Inference, 102,99 €
- Arieh Iserles (ed.), Acta Numerica 1999, 182,99 €
Note: This item can only be shipped to a German delivery address.
Product Details
- Publisher: Cambridge University Press
- Number of pages: 340
- Publication date: 29 March 2022
- Language: English
- Dimensions: 235mm x 157mm x 23mm
- Weight: 644g
- ISBN-13: 9781316518885
- ISBN-10: 1316518884
- Item no.: 63264756
Simon Foucart is Professor of Mathematics at Texas A&M University, where he was named Presidential Impact Fellow in 2019. Together with Holger Rauhut, he previously wrote the influential book A Mathematical Introduction to Compressive Sensing (2013).
Part I. Machine Learning: 1. Rudiments of Statistical Learning
2. Vapnik-Chervonenkis Dimension
3. Learnability for Binary Classification
4. Support Vector Machines
5. Reproducing Kernel Hilbert Spaces
6. Regression and Regularization
7. Clustering
8. Dimension Reduction
Part II. Optimal Recovery: 9. Foundational Results of Optimal Recovery
10. Approximability Models
11. Ideal Selection of Observation Schemes
12. Curse of Dimensionality
13. Quasi-Monte Carlo Integration
Part III. Compressive Sensing: 14. Sparse Recovery from Linear Observations
15. The Complexity of Sparse Recovery
16. Low-Rank Recovery from Linear Observations
17. Sparse Recovery from One-Bit Observations
18. Group Testing
Part IV. Optimization: 19. Basic Convex Optimization
20. Snippets of Linear Programming
21. Duality Theory and Practice
22. Semidefinite Programming in Action
23. Instances of Nonconvex Optimization
Part V. Neural Networks: 24. First Encounter with ReLU Networks
25. Expressiveness of Shallow Networks
26. Various Advantages of Depth
27. Tidbits on Neural Network Training
Appendix A. High-Dimensional Geometry
Appendix B. Probability Theory
Appendix C. Functional Analysis
Appendix D. Matrix Analysis
Appendix E. Approximation Theory.