26,99 €
incl. VAT
Free shipping*
Ready to ship in 6-10 days
  • Paperback

Product Description
Hadoop is widely used for massively distributed data storage. Even though it is highly fault tolerant, scalable, and runs on commodity hardware, it does not provide an efficient and optimized data storage solution. When a user uploads files with the same contents to Hadoop, it stores all of them in HDFS (Hadoop Distributed File System) even though the contents are identical, which duplicates content and wastes storage space. Data deduplication is a process that reduces the required storage capacity by storing only the unique instances of data. Deduplication is widely used in file servers, database management systems, backup storage, and many other storage solutions. A proper deduplication strategy makes efficient use of the space available on limited storage devices. Hadoop itself does not provide a data deduplication solution. In this work, a deduplication module has been integrated into the Hadoop framework to achieve optimized data storage.
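
The core idea, keeping a single stored instance of each unique content, can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the module described in the book: it assumes a local directory (dedup_store) as a stand-in for HDFS and uses SHA-256 content fingerprints to detect duplicate uploads.

    # Minimal sketch of content-hash deduplication (illustrative only).
    # dedup_store is a hypothetical local stand-in for HDFS storage.
    import hashlib
    import os
    import shutil

    STORE_DIR = "dedup_store"

    def sha256_of_file(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def store_file(path: str) -> str:
        """Store a file only if its content is not already present.

        Returns the digest identifying the stored content; duplicate
        uploads map to the same digest and consume no extra space.
        """
        os.makedirs(STORE_DIR, exist_ok=True)
        digest = sha256_of_file(path)
        target = os.path.join(STORE_DIR, digest)
        if not os.path.exists(target):  # unique content: store one instance
            shutil.copyfile(path, target)
        return digest

Under this scheme, calling store_file on two files with identical contents writes the data once and returns the same digest both times, which is the storage saving deduplication aims for.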
About the Author
Priteshkumar Prajapati received his B.E. and M.Tech (Gold Medal) degrees in 2012 and 2014 from the Department of Information Technology at CITC, Changa (G.T.U.) and CSPIT, Changa (CHARUSAT University), respectively. He is currently working as an Assistant Professor in the Department of Information Technology, CSPIT, CHARUSAT, Changa.