€52.99
incl. VAT
Free shipping*
Ready to ship in over 4 weeks
  • Paperback


Product description
Design scalable big data solutions using Hadoop, Spark, and AWS cloud-native services.

Key Features:
  • Build data pipelines that require distributed processing capabilities on large volumes of data
  • Discover the security features of EMR, such as data protection and granular permission management
  • Explore best practices and optimization techniques for building data analytics solutions in Amazon EMR

Book Description:
Amazon EMR, formerly Amazon Elastic MapReduce, provides a managed Hadoop cluster in Amazon Web Services (AWS) that you can use to implement batch or streaming data pipelines. By gaining expertise in Amazon EMR, you can design and implement data analytics pipelines with persistent or transient EMR clusters in AWS.
This book is a practical guide to Amazon EMR for building data pipelines. You'll start by understanding the Amazon EMR architecture, cluster nodes, features, and deployment options, along with their pricing. Next, the book covers the various big data applications that EMR supports. You'll then focus on the advanced configuration of EMR applications, hardware, networking, security, troubleshooting, logging, and the different SDKs and APIs it provides. Later chapters show you how to implement common Amazon EMR use cases, including batch ETL with Spark, real-time streaming with Spark Streaming, and handling UPSERTs in an S3 data lake with Apache Hudi. Finally, you'll orchestrate your EMR jobs and plan the migration of an on-premises Hadoop cluster to EMR. You'll also explore best practices and cost optimization techniques for implementing your data analytics pipeline in EMR.
By the end of this book, you'll be able to build and deploy Hadoop- or Spark-based apps on Amazon EMR and migrate your existing on-premises Hadoop workloads to AWS.

What You Will Learn:
  • Explore Amazon EMR features, architecture, Hadoop interfaces, and EMR Studio
  • Configure, deploy, and orchestrate Hadoop or Spark jobs in production
  • Implement the security, data governance, and monitoring capabilities of EMR
  • Build applications for batch and real-time streaming data analytics solutions
  • Perform interactive development with a persistent EMR cluster and Notebook
  • Orchestrate an EMR Spark job using AWS Step Functions and Apache Airflow

Who this book is for:
This book is for data engineers, data analysts, data scientists, and solution architects who are interested in building data analytics solutions with Hadoop ecosystem services and Amazon EMR. Prior experience in Python, Scala, or Java and a basic understanding of Hadoop and AWS will help you make the most of this book.
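
As a rough illustration of the batch ETL use case mentioned above (submitting a Spark job to an EMR cluster), here is a minimal Python sketch using boto3's AddJobFlowSteps API; the cluster ID, S3 bucket, and script path are hypothetical placeholders, not examples taken from the book.

    # Minimal sketch: submit a PySpark batch ETL step to a running EMR cluster via boto3.
    # The cluster ID, bucket, and script path below are hypothetical placeholders.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXX",  # hypothetical EMR cluster ID
        Steps=[
            {
                "Name": "batch-etl-spark",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    # command-runner.jar lets an EMR step invoke spark-submit on the cluster
                    "Jar": "command-runner.jar",
                    "Args": [
                        "spark-submit",
                        "--deploy-mode", "cluster",
                        "s3://my-example-bucket/scripts/etl_job.py",  # hypothetical script location
                    ],
                },
            }
        ],
    )

    print("Submitted step:", response["StepIds"][0])

Orchestration tools such as Apache Airflow and AWS Step Functions typically wrap this same step-submission API when they run EMR jobs.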
About the author
Sakti Mishra is an engineer, architect, author, and technology leader with over 16 years of experience in the IT industry. He currently works as a senior data lab architect at Amazon Web Services (AWS). He is passionate about technology and has expertise in big data, analytics, machine learning, artificial intelligence, graph networks, web/mobile applications, and cloud technologies such as AWS and Google Cloud Platform. Sakti has a bachelor's degree in engineering and a master's degree in business administration. He holds several certifications in Hadoop, Spark, AWS, and Google Cloud. He has also authored multiple technology blogs, workshops, and white papers, and is a public speaker who represents AWS in various domains and events.