27,95 € (-21%)
35,30 €**
incl. VAT
**Price of the printed edition (paperback)
Immediately available via download
  • Format: PDF


Product description
State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques to incorporate similarity into database systems.

We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computations. The basic idea is to decompose complex objects into sets of tokens that can be compared efficiently. Token-based distances are used to compute an approximation of the edit distance and prune expensive edit distance calculations.

A key observation when computing similarity joins is that many of the object pairs for which the similarity is computed are very different from each other. Filters exploit this property to improve the performance of similarity joins. A filter preprocesses the input data sets and produces a set of candidate pairs; the distance function is evaluated on the candidate pairs only. We describe the essential query processing techniques for filters based on lower and upper bounds. For token equality joins we describe prefix, size, positional and partitioning filters, which can be used to avoid the computation of small intersections that are not needed since the similarity would be too low.
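The filter-and-verify pattern described above can be illustrated with a small sketch (not the book's implementation): strings are decomposed into overlapping q-grams, a length filter and the classic q-gram count filter prune pairs whose token overlap is provably too small for the edit distance threshold, and only the surviving candidate pairs are verified with an exact edit distance computation. The function names and the naive nested-loop join are illustrative choices.

```python
from collections import Counter

def levenshtein(s, t):
    """Exact edit distance via dynamic programming (the expensive step)."""
    prev = list(range(len(t) + 1))
    for i in range(1, len(s) + 1):
        curr = [i] + [0] * len(t)
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution / match
        prev = curr
    return prev[len(t)]

def qgrams(s, q=2):
    """Decompose a string into its multiset of q-gram tokens, with padding."""
    padded = "#" * (q - 1) + s + "#" * (q - 1)
    return Counter(padded[i:i + q] for i in range(len(padded) - q + 1))

def count_filter_passes(s, t, tau, q=2):
    """Count filter: with padding, a string of length n has n + q - 1
    q-grams and one edit touches at most q of them, so strings within
    edit distance tau share >= max(|s|,|t|) + q - 1 - q*tau q-grams."""
    overlap = sum((qgrams(s, q) & qgrams(t, q)).values())
    return overlap >= max(len(s), len(t)) + q - 1 - q * tau

def similarity_join(strings, tau):
    """Naive self-join: cheap filters first, edit distance on candidates only."""
    result = []
    for i, s in enumerate(strings):
        for t in strings[i + 1:]:
            if abs(len(s) - len(t)) > tau:        # size (length) filter
                continue
            if not count_filter_passes(s, t, tau):  # token-based pruning
                continue
            if levenshtein(s, t) <= tau:          # verify candidate pair
                result.append((s, t))
    return result
```

The count filter is a lower-bound filter in the sense of the description: a small token overlap proves the edit distance exceeds the threshold, so the pair can be discarded without ever running the dynamic program. The prefix, positional and partitioning filters treated in the book refine the same idea by inspecting only parts of the token sets.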

For legal reasons, this download can only be delivered with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the authors
Nikolaus Augsten is a professor in the Department of Computer Science at the University of Salzburg, Austria, where he heads the Database Group. He received his Ph.D. degree in computer science from Aalborg University, Denmark, in 2008, and holds a M.Sc. degree from Graz University of Technology, Austria. Prior to joining the University of Salzburg in 2013, he was an assistant professor at the Free University of Bolzano, Italy. He was on leave at TU München, Germany, in 2010/2011 and visited Washington State University for six months in 2005/2006. His main research interests include similarity search queries over massive data collections, approximate matching techniques for complex data structures, efficient index structures for distance computations, and top-k queries. For his work on top-k approximate subtree matching he received the Best Paper Award at the IEEE International Conference on Data Engineering in 2010. Currently, he serves as an Associate Editor for the VLDB Journal.

Michael H. Böhlen is a professor of computer science at the University of Zürich, where he heads the Database Technology Group. His research interests include various aspects of data management, and have focused on time-varying information, data warehousing and data analysis, and similarity search. He received his M.Sc. and Ph.D. degrees from ETH Zürich in 1990 and 1994, respectively. Before joining the University of Zürich he visited the University of Arizona for one year, and was a faculty member at Aalborg University for eight years and the Free University of Bozen-Bolzano for six years. He was program co-chair of the 39th International Conference on Very Large Data Bases and served as an Associate Editor for the VLDB Journal. He served as a PC member for SIGMOD, VLDB, ICDE, and EDBT. Currently, he serves as an Associate Editor for ACM TODS, and he is a member of the VLDB Endowment's Board of Trustees.