49,95 €
incl. VAT
Immediately available via download
  • Format: PDF

  • Devices: PC
  • Without copy protection
  • eBook help
  • Size: 16.03 MB
Product description
This book focuses on deploying, testing, monitoring, and automating ML systems in production at cloud scale. It covers AWS MLOps services such as Amazon SageMaker, Data Wrangler, and AWS Feature Store, along with best practices for operating ML systems on AWS.

This book explains how to design, develop, and deploy ML workloads at scale using the AWS Well-Architected Framework. It starts with an introduction to AWS services and MLOps tools and walks through setting up the MLOps environment. It covers operational excellence, including CI/CD pipelines and infrastructure as code. Security in MLOps, data privacy, IAM, and reliability through automated testing are discussed. Performance efficiency and cost optimization techniques, such as right-sizing ML resources, are explored. The book concludes with MLOps best practices, MLOps for GenAI, emerging trends, and future developments in MLOps.
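To make this concrete, here is a minimal sketch (not taken from the book) of a repeatable, CI/CD-friendly training workflow defined with the SageMaker Python SDK's Pipelines API; the role ARN, S3 bucket, and pipeline name are placeholder assumptions.

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Built-in XGBoost container; any training image or script would work the same way
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts/",  # placeholder bucket
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://example-bucket/train/", content_type="text/csv")},
)

pipeline = Pipeline(name="mlops-training-pipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition (idempotent)
pipeline.start()                # start a run, e.g. triggered from a CI/CD job

In a CI/CD setup, the upsert and start calls would typically run from a build job so that every merge produces a traceable, repeatable training run.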

By the end, readers will know how to operate ML workloads on the AWS cloud. This book suits software developers, ML engineers, DevOps engineers, architects, and team leaders aspiring to become MLOps professionals on AWS.

What you will learn:

  • Create repeatable training workflows to accelerate model development
  • Catalog ML artifacts centrally for model reproducibility and governance
  • Integrate ML workflows with CI/CD pipelines for faster time to production
  • Continuously monitor data and models in production to maintain quality (see the monitoring sketch after this list)
  • Optimize model deployment for performance and cost
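As a concrete illustration of the monitoring item above, here is a minimal sketch (not from the book) using SageMaker Model Monitor to baseline training data and schedule hourly data-quality checks against a deployed endpoint; the bucket, endpoint name, and role are placeholder assumptions, and the endpoint is assumed to have data capture enabled.

from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Compute baseline statistics and constraints from the training data
monitor.suggest_baseline(
    baseline_dataset="s3://example-bucket/train/train.csv",   # placeholder training data
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://example-bucket/monitoring/baseline/",
)

# Schedule hourly data-quality checks against the endpoint's captured traffic
monitor.create_monitoring_schedule(
    monitor_schedule_name="data-quality-hourly",
    endpoint_input="example-endpoint",                        # placeholder endpoint name
    output_s3_uri="s3://example-bucket/monitoring/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)

Violation reports land in the output S3 prefix and can feed alarms or retraining triggers.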


For legal reasons, this download can only be delivered with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the authors
Neel Sendas is a Principal Technical Account Manager at Amazon Web Services (AWS). In this role, he serves as the AWS Cloud Operations lead for some of the largest enterprises that use AWS services. Drawing on his expertise in cloud operations, Neel presents in this book solutions to common challenges related to ML cloud governance, cloud finance, and cloud operational resilience and management at scale. Neel is also part of the core team of Machine Learning Technical Field Community leaders at AWS, where he contributes to shaping the roadmap of AWS Artificial Intelligence and Machine Learning (AI/ML) services. Neel is based in the state of Georgia, United States.

Deepali Rajale is a former AWS ML Specialist Technical Account Manager with extensive experience supporting enterprise customers in implementing MLOps best practices across various industries. She is also the founder of Karini AI, a company dedicated to democratizing generative AI for businesses. She enjoys blogging about ML and generative AI and coaching customers to optimize their AI/ML workloads for operational efficiency and cost. In her spare time, she enjoys traveling, seeking new experiences, and keeping up with the latest technology trends.