105,48 €
incl. VAT
Free shipping*
Ready to ship in 1-2 weeks
  • Paperback

Product description
Content-based image retrieval (CBIR) aims at finding images in large databases such as the internet based on their content. Given an example query image provided by the user, the retrieval system returns a ranked list of similar images. Most contemporary CBIR systems compare images solely by means of their visual similarity, i.e., the occurrence of similar textures and the composition of colors. However, visual similarity does not necessarily coincide with semantic similarity. For example, images of butterflies and caterpillars can be considered similar, because a caterpillar turns into a butterfly at some point in time. Visually, however, they do not have much in common.

In this work, we propose to integrate such human prior knowledge about the semantics of the world into deep learning techniques. Class hierarchies, which are readily available for a plethora of domains and encode is-a relationships (e.g., a poodle is a dog, which is an animal), serve as a source for this knowledge. Our hierarchy-based semantic embeddings improve the semantic consistency of CBIR results substantially compared to conventional image representations and features.

We furthermore present three different mechanisms for interactive image retrieval that incorporate user feedback to resolve the inherent semantic ambiguity of the query image. One of the proposed methods reduces the required user feedback to a single click by means of clustering, while another keeps the human in the loop by actively asking for feedback on those images that are expected to improve the relevance model the most. The third method allows the user to select particularly interesting regions within images. These techniques yield more relevant results after a few rounds of feedback, which reduces the total number of retrieved images the user needs to inspect in order to find relevant ones.
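The idea of deriving semantic similarity from a class hierarchy can be illustrated with a small sketch. The following Python snippet is a toy example only, not the exact formulation from the book: it computes a pairwise class similarity from a hand-written is-a hierarchy via the lowest common ancestor, so that, e.g., butterflies and caterpillars come out as more similar to each other than to dogs or cats. The hierarchy, the class names, and the normalization are assumptions made for illustration.

# Illustrative sketch only (not the exact method from the book): a semantic
# similarity between classes derived from a hand-written is-a hierarchy via the
# lowest common ancestor. All class names and the normalization are assumptions.

PARENT = {              # child -> parent ("a poodle is a dog is an animal")
    "poodle": "dog",
    "beagle": "dog",
    "dog": "animal",
    "cat": "animal",
    "butterfly": "insect",
    "caterpillar": "insect",
    "insect": "animal",
    "animal": None,     # root
}

def ancestors(c):
    """Path from class c up to the root, including c itself."""
    path = []
    while c is not None:
        path.append(c)
        c = PARENT[c]
    return path

def depth(c):
    """Number of edges between c and the root."""
    return len(ancestors(c)) - 1

MAX_DEPTH = max(depth(c) for c in PARENT)

def lowest_common_ancestor(a, b):
    anc_a = set(ancestors(a))
    for node in ancestors(b):    # ancestors(b) is ordered from b up to the root
        if node in anc_a:
            return node

def semantic_similarity(a, b):
    """In [0, 1]: the deeper the lowest common ancestor, the more similar the classes."""
    return depth(lowest_common_ancestor(a, b)) / MAX_DEPTH

# A query labeled "butterfly" ranks "caterpillar" above dogs and cats, even though
# the two look nothing alike: the hierarchy, not visual appearance, drives the ranking.
database = ["beagle", "caterpillar", "cat", "poodle"]
ranking = sorted(database, key=lambda c: semantic_similarity("butterfly", c), reverse=True)
print(ranking)   # ['caterpillar', 'beagle', 'cat', 'poodle']

Roughly speaking, hierarchy-based semantic embeddings make the learned image representations reproduce such a hierarchy-derived similarity, so that nearest-neighbor search in the embedding space yields semantically consistent retrieval results.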
Note: This item can only be shipped to a delivery address in Germany.