Artificial intelligence will change our lives forever, both at work and in our private lives. But how exactly does machine learning work? Two professors from Lübeck explore this question. In their English-language textbook they teach the basics needed to work with Support Vector Machines by explaining, among other things, linear programming, Lagrange multipliers, kernel functions and the SMO algorithm. They also cover neural networks, evolutionary algorithms and Bayesian networks. Definitions are highlighted throughout the book, and exercises invite readers to participate actively. The textbook is aimed at students of computer science, engineering and the natural sciences, especially in the fields of robotics, artificial intelligence and mathematics.
Prof. Dr. Floris Ernst teaches AI (artificial intelligence) and robotics at the University of Lübeck.
Contents
Preface
1 Symbolic Classification and Nearest Neighbour Classification
1.1 Symbolic Classification
1.2 Nearest Neighbour Classification
2 Separating Planes and Linear Programming
2.1 Finding a Separating Hyperplane
2.2 Testing for Feasibility of Linear Constraints
2.3 Linear Programming
MATLAB Example
2.4 Conclusion
3 Separating Margins and Quadratic Programming
3.1 Quadratic Programming
3.2 Maximum Margin Separator Planes
3.3 Slack Variables
4 Dualization and Support Vectors
4.1 Duals of Linear Programs
4.2 Duals of Quadratic Programs
4.3 Support Vectors
5 Lagrange Multipliers and Duality
5.1 Multidimensional Functions
5.2 Support Vector Expansion
5.3 Support Vector Expansion with Slack Variables
6 Kernel Functions
6.1 Feature Spaces
6.2 Feature Spaces and Quadratic Programming
6.3 Kernel Matrix and Mercer's Theorem
6.4 Proof of Mercer's Theorem
Step 1 - Definitions and Prerequisites
Step 2 - Designing the Right Hilbert Space
Step 3 - The Reproducing Property
7 The SMO Algorithm
7.1 Overview and Principles
7.2 Optimisation Step
7.3 Simplified SMO
8 Regression
8.1 Slack Variables
8.2 Duality, Kernels and Regression
8.3 Deriving the Dual Form of the QP for Regression
9 Perceptrons, Neural Networks and Genetic Algorithms
9.1 Perceptrons
Perceptron Algorithm
Perceptron Lemma and Convergence
Perceptrons and Linear Feasibility Testing
9.2 Neural Networks
Forward Propagation
Training and Error Backpropagation
9.3 Genetic Algorithms
9.4 Conclusion
10 Bayesian Regression
10.1 Bayesian Learning
10.2 Probabilistic Linear Regression
10.3 Gaussian Process Models
10.4 GP Model with Measurement Noise
Optimization of Hyperparameters
Covariance Functions
10.5 Multi-Task Gaussian Process (MTGP) Models
11 Bayesian Networks
Propagation of Probabilities in Causal Networks
Appendix - Linear Programming
A.1 Solving LP0 Problems
A.2 Schematic Representation of the Iteration Steps
A.3 Transition from LP0 to LP
A.4 Computing Time and Complexity Issues
References
Index