€38.99
incl. VAT
Free shipping*
Ships in 6-10 days
  • Paperback


Product Description
Regularization is a dominant theme in machine learning and statistics because it provides an intuitive and principled tool for learning from high-dimensional data. As large-scale learning applications become widespread, developing efficient algorithms and parsimonious models becomes both promising and necessary. Aiming at large-scale learning problems, this book tackles key research questions ranging from feature selection to learning with mixed unlabeled data and learning data similarity representations. More specifically, it focuses on problems in three areas: online learning, semi-supervised learning, and multiple kernel learning. The proposed models can be applied in various domains, including marketing analysis, bioinformatics, and pattern recognition.
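To illustrate the kind of regularization-driven feature selection the description refers to, here is a minimal sketch of the L1 (lasso) soft-thresholding step, which zeroes out small weights and yields a sparse model. The weight values and regularization strength are illustrative, not taken from the book.

```python
def soft_threshold(w, lam):
    """Proximal operator of the L1 norm: shrinks w toward zero,
    setting weights with magnitude below lam exactly to zero."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

# Hypothetical dense weights from an unregularized fit
weights = [0.9, -0.05, 0.3, 0.02, -1.2]

# One shrinkage step with regularization strength lam = 0.1
sparse = [soft_threshold(w, 0.1) for w in weights]
print(sparse)  # small weights become exactly 0.0 -> implicit feature selection
```

The zeroed coordinates correspond to discarded features, which is why L1 regularization serves as a feature-selection tool for high-dimensional data.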
About the Author
Haiqin Yang received his Ph.D. in Computer Science and Engineering from the Chinese University of Hong Kong in 2010. His research interests include machine learning, data mining, financial engineering, and pattern recognition. He has conducted research in these areas and produced numerous publications and patents.