Dirk P. Kroese, Zdravko Botev, Thomas Taimre, Radislav Vaisman
Data Science and Machine Learning (eBook, PDF)
Mathematical and Statistical Methods
90,95 € incl. VAT
Available immediately via download
The purpose of this book is to provide an accessible, yet comprehensive, account of data science and machine learning. It is intended for anyone interested in gaining a better understanding of the mathematics and statistics that underpin the rich variety of ideas and machine learning algorithms in data science.
- Format: PDF
- Devices: PC
- No copy protection
- Size: 30.01 MB
Other customers were also interested in:
- Carson Sievert, Interactive Web-Based Data Visualization with R, plotly, and shiny (eBook, PDF), 72,95 €
- Basilio de Braganca Pereira, Statistical Learning Using Neural Networks (eBook, PDF), 51,95 €
- Benjamin S. Baumer, Modern Data Science with R (eBook, PDF), 89,95 €
- Dothang Truong, Data Science and Machine Learning for Non-Programmers (eBook, PDF), 47,95 €
- Silvelyn Zwanzig, Computer Intensive Methods in Statistics (eBook, PDF), 57,95 €
- Paul D. McNicholas, Data Science with Julia (eBook, PDF), 56,95 €
- Wendy L. Martinez, Computational Statistics Handbook with MATLAB (eBook, PDF), 48,95 €
For legal reasons, this download can only be delivered to customers with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
Product details
- Publisher: Taylor & Francis
- Number of pages: 532
- Publication date: 20 November 2019
- Language: English
- ISBN-13: 9781000730777
- Item no.: 58278955
Dirk P. Kroese, PhD, is a Professor of Mathematics and Statistics at The University of Queensland. He has published over 120 articles and five books in a wide range of areas in mathematics, statistics, data science, machine learning, and Monte Carlo methods. He is a pioneer of the well-known Cross-Entropy method, an adaptive Monte Carlo technique that is used around the world to help solve difficult estimation and optimization problems in science, engineering, and finance.
Zdravko Botev, PhD, is an Australian Mathematical Science Institute Lecturer in Data Science and Machine Learning with an appointment at the University of New South Wales in Sydney, Australia. He is the recipient of the 2018 Christopher Heyde Medal of the Australian Academy of Science for distinguished research in the Mathematical Sciences.
Thomas Taimre, PhD, is a Senior Lecturer of Mathematics and Statistics at The University of Queensland. His research interests range from applied probability and Monte Carlo methods to applied physics and the remarkably universal self-mixing effect in lasers. He has published over 100 articles, holds a patent, and is the coauthor of Handbook of Monte Carlo Methods (Wiley).
Radislav Vaisman, PhD, is a Lecturer of Mathematics and Statistics at The University of Queensland. His research interests lie at the intersection of applied probability, machine learning, and computer science. He has published over 20 articles and two books.
Table of contents
Preface
Notation
1. Importing, Summarizing, and Visualizing Data
1.1 Introduction; 1.2 Structuring Features According to Type; 1.3 Summary Tables; 1.4 Summary Statistics; 1.5 Visualizing Data; 1.5.1 Plotting Qualitative Variables; 1.5.2 Plotting Quantitative Variables; 1.5.3 Data Visualization in a Bivariate Setting; Exercises
2. Statistical Learning
2.1 Introduction; 2.2 Supervised and Unsupervised Learning; 2.3 Training and Test Loss; 2.4 Tradeoffs in Statistical Learning; 2.5 Estimating Risk; 2.5.1 In-Sample Risk; 2.5.2 Cross-Validation; 2.6 Modeling Data; 2.7 Multivariate Normal Models; 2.8 Normal Linear Models; 2.9 Bayesian Learning; Exercises
3. Monte Carlo Methods
3.1 Introduction; 3.2 Monte Carlo Sampling; 3.2.1 Generating Random Numbers; 3.2.2 Simulating Random Variables; 3.2.3 Simulating Random Vectors and Processes; 3.2.4 Resampling; 3.2.5 Markov Chain Monte Carlo; 3.3 Monte Carlo Estimation; 3.3.1 Crude Monte Carlo; 3.3.2 Bootstrap Method; 3.3.3 Variance Reduction; 3.4 Monte Carlo for Optimization; 3.4.1 Simulated Annealing; 3.4.2 Cross-Entropy Method; 3.4.3 Splitting for Optimization; 3.4.4 Noisy Optimization; Exercises
4. Unsupervised Learning
4.1 Introduction; 4.2 Risk and Loss in Unsupervised Learning; 4.3 Expectation-Maximization (EM) Algorithm; 4.4 Empirical Distribution and Density Estimation; 4.5 Clustering via Mixture Models; 4.5.1 Mixture Models; 4.5.2 EM Algorithm for Mixture Models; 4.6 Clustering via Vector Quantization; 4.6.1 K-Means; 4.6.2 Clustering via Continuous Multiextremal Optimization; 4.7 Hierarchical Clustering; 4.8 Principal Component Analysis (PCA); 4.8.1 Motivation: Principal Axes of an Ellipsoid; 4.8.2 PCA and Singular Value Decomposition (SVD); Exercises
5. Regression
5.1 Introduction; 5.2 Linear Regression; 5.3 Analysis via Linear Models; 5.3.1 Parameter Estimation; 5.3.2 Model Selection and Prediction; 5.3.3 Cross-Validation and Predictive Residual Sum of Squares; 5.3.4 In-Sample Risk and Akaike Information Criterion; 5.3.5 Categorical Features; 5.3.6 Nested Models; 5.3.7 Coefficient of Determination; 5.4 Inference for Normal Linear Models; 5.4.1 Comparing Two Normal Linear Models; 5.4.2 Confidence and Prediction Intervals; 5.5 Nonlinear Regression Models; 5.6 Linear Models in Python; 5.6.1 Modeling; 5.6.2 Analysis; 5.6.3 Analysis of Variance (ANOVA); 5.6.4 Confidence and Prediction Intervals; 5.6.5 Model Validation; 5.6.6 Variable Selection; 5.7 Generalized Linear Models; Exercises
6. Regularization and Kernel Methods
6.1 Introduction; 6.2 Regularization; 6.3 Reproducing Kernel Hilbert Spaces; 6.4 Construction of Reproducing Kernels; 6.4.1 Reproducing Kernels via Feature Mapping; 6.4.2 Kernels from Characteristic Functions; 6.4.3 Reproducing Kernels Using Orthonormal Features; 6.4.4 Kernels from Kernels; 6.5 Representer Theorem; 6.6 Smoothing Cubic Splines; 6.7 Gaussian Process Regression; 6.8 Kernel PCA; Exercises
7. Classification
7.1 Introduction; 7.2 Classification Metrics; 7.3 Classification via Bayes' Rule; 7.4 Linear and Quadratic Discriminant Analysis; 7.5 Logistic Regression and Softmax Classification; 7.6 K-nearest Neighbors Classification; 7.7 Support Vector Machine; 7.8 Classification with Scikit-Learn; Exercises
8. Decision Trees and Ensemble Methods
8.1 Introduction; 8.2 Top-Down Construction of Decision Trees; 8.2.1 Regional Prediction Functions; 8.2.2 Splitting Rules; 8.2.3 Termination Criterion; 8.2.4 Basic Implementation; 8.3 Additional Considerations; 8.3.1 Binary Versus Non-Binary Trees; 8.3.2 Data Preprocessing; 8.3.3 Alternative Splitting Rules; 8.3.4 Categorical Variables; 8.3.5 Missing Values; 8.4 Controlling the Tree Shape; 8.4.1 Cost-Complexity Pruning; 8.4.2 Advantages and Limitations of Decision Trees; 8.5 Bootstrap Aggregation; 8.6 Random Forests; 8.7 Boosting; Exercises
9. Deep Learning
9.1 Introduction; 9.2 Feed-Forward Neural Networks; 9.3 Back-Propagation; 9.4 Methods for Training; 9.4.1 Steepest Descent; 9.4.2 Levenberg-Marquardt Method; 9.4.3 Limited-Memory BFGS Method; 9.4.4 Adaptive Gradient Methods; 9.5 Examples in Python; 9.5.1 Simple Polynomial Regression; 9.5.2 Image Classification; Exercises
A. Linear Algebra and Functional Analysis
A.1 Vector Spaces, Bases, and Matrices; A.2 Inner Product; A.3 Complex Vectors and Matrices; A.4 Orthogonal Projections; A.5 Eigenvalues and Eigenvectors; A.5.1 Left- and Right-Eigenvectors; A.6 Matrix Decompositions; A.6.1 (P)LU Decomposition; A.6.2 Woodbury Identity; A.6.3 Cholesky Decomposition; A.6.4 QR Decomposition and the Gram-Schmidt Procedure; A.6.5 Singular Value Decomposition; A.6.6 Solving Structured Matrix Equations; A.7 Functional Analysis; A.8 Fourier Transforms; A.8.1 Discrete Fourier Transform; A.8.2 Fast Fourier Transform
B. Multivariate Differentiation and Optimization
B.1 Multivariate Differentiation; B.1.1 Taylor Expansion; B.1.2 Chain Rule; B.2 Optimization Theory; B.2.1 Convexity and Optimization; B.2.2 Lagrangian Method; B.2.3 Duality; B.3 Numerical Root-Finding and Minimization; B.3.1 Newton-Like Methods; B.3.2 Quasi-Newton Methods; B.3.3 Normal Approximation Method; B.3.4 Nonlinear Least Squares; B.4 Constrained Minimization via Penalty Functions
C. Probability and Statistics
C.1 Random Experiments and Probability Spaces; C.2 Random Variables and Probability Distributions; C.3 Expectation; C.4 Joint Distributions; C.5 Conditioning and Independence; C.5.1 Conditional Probability; C.5.2 Independence; C.5.3 Expectation and Covariance; C.5.4 Conditional Density and Conditional Expectation; C.6 Functions of Random Variables; C.7 Multivariate Normal Distribution; C.8 Convergence of Random Variables; C.9 Law of Large Numbers and Central Limit Theorem; C.10 Markov Chains; C.11 Statistics; C.12 Estimation; C.12.1 Method of Moments; C.12.2 Maximum Likelihood Method; C.13 Confidence Intervals; C.14 Hypothesis Testing
D. Python Primer
D.1 Getting Started; D.2 Python Objects; D.3 Types and Operators; D.4 Functions and Methods; D.5 Modules; D.6 Flow Control; D.7 Iteration; D.8 Classes; D.9 Files; D.10 NumPy; D.10.1 Creating and Shaping Arrays; D.10.2 Slicing; D.10.3 Array Operations; D.10.4 Random Numbers; D.11 Matplotlib; D.11.1 Creating a Basic Plot; D.12 Pandas; D.12.1 Series and DataFrame; D.12.2 Manipulating Data Frames; D.12.3 Extracting Information; D.12.4 Plotting; D.13 Scikit-learn; D.13.1 Partitioning the Data; D.13.2 Standardization; D.13.3 Fitting and Prediction; D.13.4 Testing the Model; D.14 System Calls, URL Access, and Speed-Up
Bibliography
Index