- Paperback
Other customers were also interested in
- Kashmir Hill: Your Face Belongs to Us (17,99 €)
- Rachel Botsman: Who Can You Trust? (15,99 €)
- Joachim Erven: Mathematik für angewandte Wissenschaften (39,95 €)
- Nathan Smith: The Future (15,99 €)
- Towards the Integration of IoT, Cloud and Big Data (125,99 €)
- James P. Rose: Machine Learning for Smart Homes (24,99 €)
- J Joshua Thomas: Smart Cities and Machine Learning in Urban Health (173,99 €)
Artificial intelligence will change our lives forever - both at work and in our private lives. But how exactly does machine learning work? Two professors from Lübeck explore this question. In their English-language textbook they teach the basics needed to work with Support Vector Machines, explaining, for example, linear programming, Lagrange multipliers, kernels and the SMO algorithm. They also cover neural networks, evolutionary algorithms and Bayesian networks. Definitions are highlighted throughout the book, and exercises invite readers to work through the material actively. The textbook is aimed at students of computer science, engineering and the natural sciences, especially in the fields of robotics, artificial intelligence and mathematics.
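As a small illustration of the approach the book's second chapter describes (finding a separating hyperplane by testing the feasibility of linear constraints with a linear program; the book's own worked example uses MATLAB), here is a minimal sketch in Python. It is not taken from the book: the toy data, the variable names, and the use of scipy.optimize.linprog are this illustration's own assumptions.

```python
# Hypothetical sketch (not from the book): find a separating hyperplane
# w.x + b for labelled 2-D points by testing feasibility of the linear
# constraints y_i * (w.x_i + b) >= 1 via a linear program.
import numpy as np
from scipy.optimize import linprog

# Toy data: two linearly separable classes with labels +1 / -1.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Unknowns z = (w1, w2, b). Rewrite y_i*(w.x_i + b) >= 1 as
# -y_i*(w.x_i + b) <= -1, i.e. A_ub @ z <= b_ub.
A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
b_ub = -np.ones(len(X))

# Pure feasibility test: zero objective, unbounded variables
# (linprog defaults to bounds [0, inf), so they must be lifted).
res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)

if res.success:
    w, b = res.x[:2], res.x[2]
    print("separating hyperplane: w =", w, "b =", b)
else:
    print("data is not linearly separable")
```

If the constraints are feasible, any returned point yields a separating hyperplane; chapter 3 of the book then sharpens this idea to the maximum-margin separator via quadratic programming.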
Product details
- UTB Uni-Taschenbücher 5251
- Publisher: UTB
- Publisher's item no.: 5251
- Number of pages: 154
- Publication date: 13 July 2020
- Language: English
- Dimensions: 264mm x 195mm x 15mm
- Weight: 412g
- ISBN-13: 9783825252519
- ISBN-10: 3825252515
- Item no.: 56300543
Prof. Dr. Floris Ernst teaches AI (Artificial Intelligence) and Robotics at the University of Lübeck.
Contents
Preface
1 Symbolic Classification and Nearest Neighbour Classification
1.1 Symbolic Classification
1.2 Nearest Neighbour Classification
2 Separating Planes and Linear Programming
2.1 Finding a Separating Hyperplane
2.2 Testing for feasibility of linear constraints
2.3 Linear Programming
MATLAB example
2.4 Conclusion
3 Separating Margins and Quadratic Programming
3.1 Quadratic Programming
3.2 Maximum Margin Separator Planes
3.3 Slack Variables
4 Dualization and Support Vectors
4.1 Duals of Linear Programs
4.2 Duals of Quadratic Programs
4.3 Support Vectors
5 Lagrange Multipliers and Duality
5.1 Multidimensional functions
5.2 Support Vector Expansion
5.3 Support Vector Expansion with Slack Variables
6 Kernel Functions
6.1 Feature Spaces
6.2 Feature Spaces and Quadratic Programming
6.3 Kernel Matrix and Mercer's Theorem
6.4 Proof of Mercer's Theorem
Step 1 – Definitions and Prerequisites
Step 2 – Designing the right Hilbert Space
Step 3 – The reproducing property
7 The SMO Algorithm
7.1 Overview and Principles
7.2 Optimisation Step
7.3 Simplified SMO
8 Regression
8.1 Slack Variables
8.2 Duality, Kernels and Regression
8.3 Deriving the Dual form of the QP for Regression
9 Perceptrons, Neural Networks and Genetic Algorithms
9.1 Perceptrons
Perceptron-Algorithm
Perceptron-Lemma and Convergence
Perceptrons and Linear Feasibility Testing
9.2 Neural Networks
Forward Propagation
Training and Error Backpropagation
9.3 Genetic Algorithms
9.4 Conclusion
10 Bayesian Regression
10.1 Bayesian Learning
10.2 Probabilistic Linear Regression
10.3 Gaussian Process Models
10.4 GP model with measurement noise
Optimization of hyperparameters
Covariance functions
10.5 Multi-Task Gaussian Process (MTGP) Models
11 Bayesian Networks
Propagation of probabilities in causal networks
Appendix – Linear Programming
A.1 Solving LP0 problems
A.2 Schematic representation of the iteration steps
A.3 Transition from LP0 to LP
A.4 Computing time and complexity issues
References
Index