Floris Ernst, Achim Schweikard
Fundamentals of Machine Learning (eBook, PDF)
Support Vector Machines Made Easy
Instead of €25.90**
€24.99
**Price of the printed edition (paperback)
Incl. VAT; price set by the publisher.
- Format: PDF
- Devices: PC
- No copy protection
- Size: 2.78 MB
- Upload possible
Other customers were also interested in
- Machine Learning Paradigms (eBook, PDF), €149.79
- Emerging Paradigms in Machine Learning (eBook, PDF), €71.95
- Arthur Kordon: Applying Computational Intelligence (eBook, PDF), €71.95
- Digitales Ökosystem für Innovationen in der Landwirtschaft (eBook, PDF), €99.99
- Neha Sharma: Auf dem Weg zu Netto-Null-Zielen (eBook, PDF), €89.99
- Matthias Haun: Cognitive Computing (eBook, PDF), €89.91
- Artificial Intelligence in Decision Support Systems for Diagnosis in Medical Imaging (eBook, PDF), €171.19
Artificial intelligence will change our lives forever, both at work and in private life. But how exactly does machine learning work? Two professors from Lübeck explore this question. In their English-language textbook they teach the basics needed to work with Support Vector Machines, explaining, for example, linear programming, Lagrange multipliers, kernels and the SMO algorithm. They also cover neural networks, evolutionary algorithms and Bayesian networks. Definitions are highlighted throughout the book, and exercises invite readers to work through the material actively. The textbook is aimed at students of computer science, engineering and the natural sciences, especially in the fields of robotics, artificial intelligence and mathematics.
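To make the blurb's keywords a little more concrete, the following short Python sketch trains a soft-margin SVM with a kernel on toy data. It is a hypothetical illustration, not material from the book (whose worked examples use MATLAB), and it assumes NumPy and scikit-learn as stand-ins for the concepts the authors derive from first principles.

```python
# Minimal sketch, assuming NumPy and scikit-learn are installed.
# Illustration only; not code from the book (its examples use MATLAB).
import numpy as np
from sklearn import svm

# Toy 2-D data: two point clouds with labels -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, size=(20, 2)),
               rng.normal(loc=+2.0, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

# Soft-margin SVM with an RBF kernel; C weights the slack-variable penalty.
# scikit-learn's SVC solves the dual quadratic program with an SMO-type
# solver (libsvm), i.e. the same machinery the book develops step by step.
clf = svm.SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# Only points with non-zero Lagrange multipliers become support vectors,
# and only they appear in the resulting support vector expansion.
print("support vectors:", clf.support_vectors_.shape[0])
print("prediction for the origin:", clf.predict([[0.0, 0.0]]))
```

Switching kernel="rbf" to kernel="linear" recovers the maximum-margin separating hyperplane discussed in the book's earlier chapters.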
Product details
- Publisher: UTB GmbH
- Number of pages: 157
- Publication date: 13 July 2020
- Language: English
- ISBN-13: 9783838552514
- Item no.: 71188944
Prof. Dr. Floris Ernst teaches AI (artificial intelligence) and robotics at the University of Lübeck.
Contents
Preface
1 Symbolic Classification and Nearest Neighbour Classification
  1.1 Symbolic Classification
  1.2 Nearest Neighbour Classification
2 Separating Planes and Linear Programming
  2.1 Finding a Separating Hyperplane
  2.2 Testing for feasibility of linear constraints
  2.3 Linear Programming
      MATLAB example
  2.4 Conclusion
3 Separating Margins and Quadratic Programming
  3.1 Quadratic Programming
  3.2 Maximum Margin Separator Planes
  3.3 Slack Variables
4 Dualization and Support Vectors
  4.1 Duals of Linear Programs
  4.2 Duals of Quadratic Programs
  4.3 Support Vectors
5 Lagrange Multipliers and Duality
  5.1 Multidimensional functions
  5.2 Support Vector Expansion
  5.3 Support Vector Expansion with Slack Variables
6 Kernel Functions
  6.1 Feature Spaces
  6.2 Feature Spaces and Quadratic Programming
  6.3 Kernel Matrix and Mercer's Theorem
  6.4 Proof of Mercer's Theorem
      Step 1 – Definitions and Prerequisites
      Step 2 – Designing the right Hilbert Space
      Step 3 – The reproducing property
7 The SMO Algorithm
  7.1 Overview and Principles
  7.2 Optimisation Step
  7.3 Simplified SMO
8 Regression
  8.1 Slack Variables
  8.2 Duality, Kernels and Regression
  8.3 Deriving the Dual form of the QP for Regression
9 Perceptrons, Neural Networks and Genetic Algorithms
  9.1 Perceptrons
      Perceptron Algorithm
      Perceptron Lemma and Convergence
      Perceptrons and Linear Feasibility Testing
  9.2 Neural Networks
      Forward Propagation
      Training and Error Backpropagation
  9.3 Genetic Algorithms
  9.4 Conclusion
10 Bayesian Regression
  10.1 Bayesian Learning
  10.2 Probabilistic Linear Regression
  10.3 Gaussian Process Models
  10.4 GP model with measurement noise
      Optimization of hyperparameters
      Covariance functions
  10.5 Multi-Task Gaussian Process (MTGP) Models
11 Bayesian Networks
      Propagation of probabilities in causal networks
Appendix – Linear Programming
  A.1 Solving LP0 problems
  A.2 Schematic representation of the iteration steps
  A.3 Transition from LP0 to LP
  A.4 Computing time and complexity issues
References
Index