This book provides a concise but comprehensive guide to representation, which forms the core of Machine Learning (ML). State-of-the-art practical applications pose a number of challenges for the analysis of high-dimensional data. Unfortunately, many popular ML algorithms fail to perform well, in both theory and practice, when confronted with the sheer size and dimensionality of the underlying data. Solutions to this problem are covered in detail in the book.
In addition, the book covers a wide range of representation techniques that are important for academics and ML practitioners alike, such as Locality Sensitive Hashing (LSH), Distance Metrics and Fractional Norms, Principal Components (PCs), Random Projections and Autoencoders. Several experimental results are provided in the book to demonstrate the discussed techniques’ effectiveness.
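As a flavour of one of the listed techniques, the following is a minimal sketch of random projection in the Johnson–Lindenstrauss spirit: high-dimensional points are mapped to a much lower dimension by a scaled Gaussian matrix while pairwise distances are approximately preserved. The data sizes and target dimension below are illustrative assumptions, not figures from the book's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, orig_dim, target_dim = 500, 10_000, 300

# Synthetic high-dimensional data (stand-in for a real dataset).
X = rng.standard_normal((n_samples, orig_dim))

# Gaussian projection matrix, scaled by 1/sqrt(target_dim) so that
# Euclidean distances are preserved in expectation.
R = rng.standard_normal((orig_dim, target_dim)) / np.sqrt(target_dim)
X_proj = X @ R

# Compare a few pairwise distances before and after projection.
def pairwise_dist(A, i, j):
    return np.linalg.norm(A[i] - A[j])

for i, j in [(0, 1), (2, 3), (4, 5)]:
    d_orig = pairwise_dist(X, i, j)
    d_proj = pairwise_dist(X_proj, i, j)
    print(f"pair ({i},{j}): original {d_orig:.2f}, projected {d_proj:.2f}")
```

Running the sketch shows the projected distances staying close to the originals, which is the property that makes random projections attractive for the high-dimensional settings the book addresses.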