Product description
Opportunity and Curiosity find similar rocks on Mars. One can generally understand this statement if one knows that Opportunity and Curiosity are instances of the class of Mars rovers, and recognizes that, as signalled by the word on, rocks are located on Mars. Two mental operations contribute to understanding: recognizing how entities/concepts mentioned in a text interact, and recalling already known facts (which often themselves consist of relations between entities/concepts). Concept interactions identified in a text can be added to the repository of known facts and aid the processing of future texts. The amassed knowledge can assist many advanced language-processing tasks, including summarization, question answering and machine translation.

Semantic relations are the connections we perceive between things which interact. The book explores two now intertwined threads in semantic relations: how they are expressed in texts and what role they play in knowledge repositories. A historical perspective takes us back more than 2000 years to their beginnings, and then to developments much closer to our time: various attempts at producing lists of semantic relations necessary and sufficient to express the interaction between entities/concepts. A look at relations outside context, then in general texts, and then in texts in specialized domains has gradually brought new insights and led to essential adjustments in how the relations are seen.

At the same time, datasets which encompass these phenomena have become available. They started small, then grew somewhat, then became truly large. The large resources are inevitably noisy because they are constructed automatically. The available corpora, whether analyzed directly or mined for relational evidence, have also grown, and some systems now operate at Web scale.

The learning of semantic relations has proceeded in parallel, following the supervised, unsupervised or distantly supervised paradigms. Detailed analyses of annotated datasets in supervised learning have yielded insights useful in developing unsupervised and distantly supervised methods. These in turn have contributed to the understanding of what relations are and how to find them, and that has led to methods scalable to Web-sized textual data. The size and redundancy of information in very large corpora, which at first seemed problematic, have been harnessed to improve relation extraction and learning. The newest technology, deep learning, supplies innovative and surprising solutions to a variety of problems in relation learning. This book aims to paint a big picture and to offer interesting details.
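To make the opening example concrete, here is a minimal Python sketch (not taken from the book) of the two operations described above: relations identified in a text are added to a small repository of known facts, and known facts are recalled for an entity. The relation labels and the recall helper are illustrative assumptions, not the book's notation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    # A semantic relation as a (head, relation, tail) triple, e.g. (rocks, located-on, Mars).
    head: str
    name: str
    tail: str

# Already known facts: Opportunity and Curiosity are instances of the class of Mars rovers.
knowledge = {
    Relation("Opportunity", "instance-of", "Mars rover"),
    Relation("Curiosity", "instance-of", "Mars rover"),
}

# Concept interactions one might identify in the sentence
# "Opportunity and Curiosity find similar rocks on Mars."
extracted = [
    Relation("Opportunity", "finds", "rocks"),
    Relation("Curiosity", "finds", "rocks"),
    Relation("rocks", "located-on", "Mars"),  # signalled by the word "on"
]

# The interactions found in the text are added to the repository of known facts ...
knowledge.update(extracted)

# ... and known facts can be recalled to help process future texts.
def recall(entity):
    return sorted((r for r in knowledge if r.head == entity), key=lambda r: r.name)

print(recall("Opportunity"))
# -> [Relation(head='Opportunity', name='finds', tail='rocks'),
#     Relation(head='Opportunity', name='instance-of', tail='Mars rover')]

The (head, relation, tail) view mirrors how the knowledge repositories discussed in the book store facts; a real relation-extraction system would of course induce such triples from text automatically rather than list them by hand.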
About the authors
Vivi Nastase holds a Ph.D. from the University of Ottawa. A research associate at the University of Stuttgart, she works mainly on lexical semantics, semantic relations, knowledge acquisition, and language evolution.

Stan Szpakowicz holds a Ph.D. from Warsaw University and a D.Sc. from the Institute of Informatics, Polish Academy of Sciences. Now an emeritus professor of Computer Science at the University of Ottawa, he has dabbled in NLP since 1969. His interests in the past several years include semantic relations and lexical resources.

Preslav Nakov holds a Ph.D. from the University of California, Berkeley. He leads the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of fake news, propaganda, and media bias.

Diarmuid Ó Séaghdha holds a Ph.D. from the University of Cambridge. He works for Apple and is a Visiting Industrial Fellow at the University of Cambridge's NLIP Research Group. His interests revolve around the application of machine learning techniques to semantic processing tasks.