- Format: ePub
A unified, coherent treatment of current classifier ensemble methods, from fundamentals of pattern recognition to ensemble feature selection, now in its second edition.

The art and science of combining pattern classifiers has flourished into a prolific discipline since the first edition of Combining Pattern Classifiers was published in 2004. Dr. Kuncheva has plucked from the rich landscape of recent classifier ensemble literature the topics, methods, and algorithms that will guide the reader toward a deeper understanding of the fundamentals, design, and applications of classifier ensemble methods.
- Devices: eReader
- with copy protection
- eBook help
- Size: 14.91 MB
For legal reasons, this download can only be delivered to customers with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
- Product details
- Publisher: Jossey-Bass
- Number of pages: 384
- Publication date: 13 August 2014
- Language: English
- ISBN-13: 9781118914540
- Item no.: 41431283
Table of contents (excerpt):
- 8.3.3 Independent Outputs ≠ Independent Errors 262
- 8.3.4 Independence is Not the Best Scenario 265
- 8.3.5 Diversity and Ensemble Margins 267
- 8.4 Using Diversity 270
- 8.4.1 Diversity for Finding Bounds and Theoretical Relationships 270
- 8.4.2 Kappa-error Diagrams and Ensemble Maps 271
- 8.4.3 Overproduce and Select 275
- 8.5 Conclusions: Diversity of Diversity 279
- Appendix 280
- 8.A.1 Derivation of Diversity Measures for Oracle Outputs 280
- 8.A.1.1 Correlation 280
- 8.A.1.2 Interrater Agreement 281
- 8.A.2 Diversity Measure Equivalence 282
- 8.A.3 Independent Outputs ≠ Independent Errors 284
- 8.A.4 A Bound on the Kappa-Error Diagram 286
- 8.A.5 Calculation of the Pareto Frontier 287
- 9 Ensemble Feature Selection 290
- 9.1 Preliminaries 290
- 9.1.1 Right and Wrong Protocols 290
- 9.1.2 Ensemble Feature Selection Approaches 294
- 9.1.3 Natural Grouping 294
- 9.2 Ranking by Decision Tree Ensembles 295
- 9.2.1 Simple Count and Split Criterion 295
- 9.2.2 Permuted Features or the "Noised-up" Method 297
- 9.3 Ensembles of Rankers 299
- 9.3.1 The Approach 299
- 9.3.2 Ranking Methods (Criteria) 300
- 9.4 Random Feature Selection for the Ensemble 305
- 9.4.1 Random Subspace Revisited 305
- 9.4.2 Usability, Coverage, and Feature Diversity 306
- 9.4.3 Genetic Algorithms 312
- 9.5 Nonrandom Selection 315
- 9.5.1 The "Favorite Class" Model 315
- 9.5.2 The Iterative Model 315
- 9.5.3 The Incremental Model 316
- 9.6 A Stability Index 317
- 9.6.1 Consistency Between a Pair of Subsets 317
- 9.6.2 A Stability Index for K Sequences 319
- 9.6.3 An Example of Applying the Stability Index 320
- Appendix 322
- 9.A.1 MATLAB Code for the Numerical Example of Ensemble Ranking 322
- 9.A.2 MATLAB GA Nuggets 322
- 9.A.3 MATLAB Code for the Stability Index 324
- 10 A Final Thought 326
- References 327
- Index 353