24,95 € incl. VAT
Available immediately via download
  • Format: PDF

Product Description
Explore architectural approaches to building Data Lakes that ingest, index, manage, and analyze massive amounts of data using Big Data technologies.

About This Book
  • Comprehend the intricacies of architecting a Data Lake and build a data strategy around your current data architecture
  • Efficiently manage vast amounts of data and deliver it to multiple applications and systems with a high degree of performance and scalability
  • Packed with industry best practices and use-case scenarios to get you up and running

Who This Book Is For
This book is for architects and senior managers who are responsible for building a strategy around their current data architecture, helping them identify the need for a Data Lake implementation in an enterprise context. The reader will need a good knowledge of master data management and information lifecycle management, as well as experience with Big Data technologies.

What You Will Learn
  • Identify the need for a Data Lake in your enterprise context and learn to architect a Data Lake
  • Build the various tiers of a Data Lake, such as data intake, management, consumption, and governance, with a focus on practical implementation scenarios
  • Find out the key considerations to take into account while building each tier of the Data Lake
  • Understand Hadoop-oriented data transfer mechanisms for ingesting data in batch, micro-batch, and real-time modes
  • Explore various data integration needs and learn how to perform data enrichment and data transformations using Big Data technologies
  • Enable data discovery on the Data Lake so that users can find the data they need
  • Discover how data is packaged and provisioned for consumption
  • Comprehend the importance of including data governance disciplines while building a Data Lake

In Detail
A Data Lake is a highly scalable platform for storing huge volumes of multistructured data from disparate sources, with centralized data management services.
This book explores the potential of Data Lakes and the architectural approaches to building them: ingesting, indexing, managing, and analyzing massive amounts of data using batch and real-time processing frameworks. It guides you through building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it shows you how to develop a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.

Style and Approach
Data Lake Development with Big Data presents architectural approaches to building a Data Lake. It follows a use case-based approach in which practical implementation scenarios for each key component are explained, helping you understand how these use cases are realized in a Data Lake. The chapters are organized to mirror the sequential flow of data through a Data Lake.

For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the Authors
Pradeep Pasupuleti has over 17 years of experience in architecting and developing distributed and real-time data-driven systems. He currently focuses on building robust data platforms and data products powered by scalable machine-learning algorithms, delivering value to customers by applying his deep technical insights to their business problems. Pradeep founded Datatma expressly to humanize and simplify Big Data and to unlock new value at a previously unimaginable scale and scope. He has created Centers of Excellence (COEs) that deliver quick wins with data products analyzing high-dimensional multistructured data using scalable natural language processing and deep learning techniques, and he has held technology consulting and advisory roles with Fortune 500 companies.

Beulah Salome Purra has over 11 years of experience and specializes in building large-scale distributed systems. Her core expertise lies in Big Data analytics. In her current role at ATMECS, she focuses on building robust and scalable data products that extract value from the organization's huge data assets.