Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate analytics results and wrong business decisions. According to a 2012 report by InsightSquared, poor data across businesses and government costs the United States economy 3.1 trillion dollars a year. To detect data errors, data quality rules, or integrity constraints (ICs), have been proposed as a declarative way to describe legal or correct data instances. Any subset of the data that does not conform to the defined rules is considered erroneous and is also referred to as a violation. Various data repairing techniques with different objectives have been introduced: algorithms detect subsets of the data that violate the declared integrity constraints, and may even suggest updates to the database so that the new database instance conforms to these constraints. While some of these algorithms aim to change the database minimally, others involve human experts or knowledge bases to verify the repairs suggested by the automatic repairing algorithms.

Trends in Cleaning Relational Data: Consistency and Deduplication discusses the main facets and directions in designing error detection and repairing techniques. It proposes a taxonomy of current anomaly detection techniques, covering error types, the automation of the detection process, and error propagation. It also sets out a taxonomy of current data repairing techniques, covering the repair target, the automation of the repair process, and the update model. It concludes by highlighting current trends in "big data" cleaning.
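To make the notion of a violation concrete, the following is a minimal, illustrative sketch (not taken from the surveyed work) of detecting violations of a simple functional dependency, zip -> city: any group of tuples that share a zip code but disagree on the city violates the rule. The table, column names, and helper function are hypothetical.

```python
from collections import defaultdict

# Hypothetical relation with a data error: two tuples share zip "10001"
# but record different city values.
rows = [
    {"name": "Ada",  "zip": "10001", "city": "New York"},
    {"name": "Bob",  "zip": "10001", "city": "NYC"},
    {"name": "Cara", "zip": "60601", "city": "Chicago"},
]

def fd_violations(rows, lhs, rhs):
    """Return groups of tuples violating the functional dependency lhs -> rhs.

    Tuples are grouped by their lhs value; a group whose rhs values are not
    all equal does not conform to the rule and is reported as a violation.
    """
    groups = defaultdict(list)
    for r in rows:
        groups[r[lhs]].append(r)
    return [group for group in groups.values()
            if len({r[rhs] for r in group}) > 1]

for violation in fd_violations(rows, "zip", "city"):
    print("Violating tuples:", violation)
```

A repairing algorithm would go one step further than this detector, for example by proposing an update that makes the conflicting city values agree while changing as few tuples as possible.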