
Similarity Joins in Relational Database Systems [electronic resource] / by Nikolaus Augsten, Michael Böhlen.

By: Augsten, Nikolaus [author.].
Contributor(s): Böhlen, Michael [author.] | SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Data Management
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2014
Edition: 1st ed. 2014
Description: XVII, 106 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031018510
Subject(s): Computer networks | Data structures (Computer science) | Information theory | Computer Communication Networks | Data Structures and Information Theory
Additional physical formats: Printed edition
DDC classification: 004.6
Online resources: Click here to access online
Contents:
Preface -- Acknowledgments -- Introduction -- Data Types -- Edit-Based Distances -- Token-Based Distances -- Query Processing Techniques -- Filters for Token Equality Joins -- Conclusion -- Bibliography -- Authors' Biographies -- Index.
In: Springer Nature eBook

Summary: State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects, equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques needed to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computations. The basic idea is to decompose complex objects into sets of tokens that can be compared efficiently. Token-based distances are used to compute an approximation of the edit distance and to prune expensive edit distance calculations. A key observation when computing similarity joins is that many of the object pairs for which the similarity is computed are very different from each other. Filters exploit this property to improve the performance of similarity joins. A filter preprocesses the input data sets and produces a set of candidate pairs; the distance function is evaluated on the candidate pairs only. We describe the essential query processing techniques for filters based on lower and upper bounds. For token equality joins we describe prefix, size, positional, and partitioning filters, which avoid the computation of small intersections that are not needed because the similarity would be too low.
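As a concrete illustration of the filter-and-verify strategy described in the summary (this code is not taken from the book; the function names, the choice of q = 2, and the sample data are assumptions made for the example), the following Python sketch joins two string collections under an edit-distance threshold. A cheap size (length) filter and a token-based count filter produce candidate pairs, and the exact edit distance is evaluated only on those candidates.

from collections import Counter

def qgram_profile(s, q=2):
    """Bag (multiset) of q-grams of s, padded with q-1 '#' characters on each side."""
    padded = "#" * (q - 1) + s + "#" * (q - 1)
    return Counter(padded[i:i + q] for i in range(len(padded) - q + 1))

def edit_distance(a, b):
    """Textbook dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca by cb
        prev = curr
    return prev[-1]

def similarity_join(R, S, tau, q=2):
    """All pairs (r, s) from R x S with edit_distance(r, s) <= tau,
    computed with a filter-and-verify strategy."""
    result = []
    profiles_S = [(s, qgram_profile(s, q)) for s in S]
    for r in R:
        pr = qgram_profile(r, q)
        for s, ps in profiles_S:
            # Size filter: strings whose lengths differ by more than tau
            # cannot be within edit distance tau.
            if abs(len(r) - len(s)) > tau:
                continue
            # Count filter (token-based lower bound): one edit operation
            # destroys at most q padded q-grams, so strings within edit
            # distance tau share at least max(|r|,|s|) + q - 1 - tau*q q-grams.
            overlap = sum((pr & ps).values())
            if overlap < max(len(r), len(s)) + q - 1 - tau * q:
                continue
            # Verification: the expensive edit distance is computed only
            # on the surviving candidate pairs.
            if edit_distance(r, s) <= tau:
                result.append((r, s))
    return result

if __name__ == "__main__":
    R = ["similarity", "database", "relational"]
    S = ["similarty", "databases", "relation", "join"]
    print(similarity_join(R, S, tau=2))

A real system would not enumerate all pairs but would index the q-gram tokens, for example with inverted lists, and apply the prefix, size, positional, and partitioning filters mentioned above to that index; the sketch only shows how a lower-bound filter prunes expensive edit distance calls.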


