53,99 €
incl. VAT
Free shipping*
Ready to ship in 1-2 weeks
  • Paperback

Product Description
Build, train, and deploy large machine learning models at scale in various domains such as computational fluid dynamics, genomics, autonomous vehicles, and numerical optimization using Amazon SageMaker.

Key Features:
  • Understand the need for high-performance computing (HPC)
  • Build, train, and deploy large ML models with billions of parameters using Amazon SageMaker
  • Learn best practices and architectures for implementing ML at scale using HPC

Book Description:
Machine learning (ML) and high-performance computing (HPC) on AWS run compute-intensive workloads across industries and emerging applications, with use cases in verticals such as computational fluid dynamics (CFD), genomics, and autonomous vehicles. This book provides end-to-end guidance, starting with HPC concepts for storage and networking. It then progresses to working examples of how to process large datasets using SageMaker Studio and EMR. Next, you'll learn how to build, train, and deploy large models using distributed training. Later chapters guide you through deploying models to edge devices using SageMaker and IoT Greengrass, and through performance optimization of ML models for low-latency use cases. By the end of this book, you'll be able to build, train, and deploy your own large-scale ML application using HPC on AWS, following industry best practices and addressing the key pain points encountered in the application life cycle.

What You Will Learn:
  • Explore data management, storage, and fast networking for HPC applications
  • Analyze and visualize large volumes of data using Spark
  • Train visual transformer models using SageMaker distributed training
  • Deploy and manage ML models at scale in the cloud and at the edge
  • Get to grips with performance optimization of ML models for low-latency workloads
  • Apply HPC to industry domains such as CFD, genomics, autonomous vehicles, and optimization

Who this book is for:
The book begins with HPC concepts; however, it expects you to have prior machine learning knowledge. This book is for ML engineers and data scientists interested in advanced topics such as training large models on large datasets with distributed training on AWS, deploying models at scale, and performance optimization for low-latency use cases. Practitioners in fields such as numerical optimization, computational fluid dynamics, autonomous vehicles, and genomics who require HPC to apply ML models to applications at scale will also find the book useful.
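
To give a flavor of the SageMaker distributed training workflow the description refers to, here is a minimal sketch using the SageMaker Python SDK. It is not taken from the book: the IAM role ARN, the train.py entry point, the S3 data path, and the framework/instance choices are illustrative assumptions.

# Minimal sketch: launching a SageMaker data-parallel training job.
# Assumes an existing SageMaker execution role and a training script
# "train.py" in a local "src" directory (both hypothetical placeholders).
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder ARN

estimator = PyTorch(
    entry_point="train.py",           # hypothetical training script
    source_dir="src",
    role=role,
    framework_version="1.12",         # assumed version with SMDDP support
    py_version="py38",
    instance_count=2,                 # scale out across instances
    instance_type="ml.p4d.24xlarge",  # GPU instances for large models
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    sagemaker_session=session,
)

# Start training on data stored in S3 (placeholder bucket/prefix).
estimator.fit({"train": "s3://my-bucket/train-data/"})
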
Note: This item can only be shipped to a delivery address in Germany.
About the Author
Mani Khanuja is a seasoned IT professional with over 17 years of software engineering experience. She has successfully led machine learning and artificial intelligence projects in various domains, such as forecasting, computer vision, and natural language processing. At AWS, she helps customers to build, train, and deploy large machine learning models at scale. She also specializes in data preparation, distributed model training, performance optimization, machine learning at the edge, and automating the complete machine learning life cycle to build repeatable and scalable applications.