Efficient Data Deduplication in Hadoop


Hadoop is widely used for massively distributed data storage. Although it is highly fault tolerant, scalable, and runs on commodity hardware, it does not by itself provide an efficient, optimized storage solution. When a user uploads files with identical contents, Hadoop stores every copy in HDFS (the Hadoop Distributed File System), duplicating the contents and wasting storage space. Data deduplication is a process that reduces the required storage capacity by keeping only the unique instances of the data. The data deduplication process is widely used...
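To illustrate the core idea only (the book's actual method is not detailed in this blurb), the following minimal sketch hashes a file's contents before uploading and skips the write when that digest has already been stored. The DedupUploader class and the /dedup-index/ and /data/ HDFS directories are hypothetical names chosen for this example; the Hadoop FileSystem calls used (get, exists, copyFromLocalFile, create) are standard API.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch: skip an HDFS upload when a file with identical contents
 *  (same SHA-256 digest) has already been stored. Directory layout
 *  is an assumption made for this example. */
public class DedupUploader {

    // Hypothetical layout: one empty marker file per known digest.
    private static final String INDEX_DIR = "/dedup-index/";
    private static final String DATA_DIR  = "/data/";

    public static void upload(String localFile) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        String digest = sha256Hex(localFile);
        Path marker = new Path(INDEX_DIR + digest);

        if (fs.exists(marker)) {
            // Identical contents already stored: write no second copy.
            System.out.println("Duplicate content, skipping: " + localFile);
            return;
        }

        // First occurrence of this content: store it and record its digest.
        fs.copyFromLocalFile(new Path(localFile), new Path(DATA_DIR + digest));
        fs.create(marker).close();
        System.out.println("Stored unique content: " + localFile);
    }

    // Compute the hex-encoded SHA-256 digest of a local file's contents.
    private static String sha256Hex(String file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = new FileInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}

In this whole-file scheme a second upload of identical content costs only a hash computation and one HDFS existence check, which is the storage saving the blurb describes; finer-grained schemes deduplicate at the block or chunk level instead.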