37,99 €
incl. VAT
Free shipping*
Ready to ship in 6-10 days
  • Paperback

Product description
Information extraction (IE) and text summarization (TS) are powerful technologies for finding relevant pieces of information in text and presenting them to the user in condensed form. The ongoing information explosion makes IE and TS critical for successful functioning within the information society.

These technologies face particular challenges due to the inherent multi-source nature of the information explosion. They must now handle not isolated texts or individual narratives, but large-scale repositories and streams, generally in multiple languages, containing a multiplicity of perspectives, opinions, or commentaries on particular topics, entities, or events. There is thus a need to adapt existing techniques and to develop new ones to deal with these challenges.

This volume contains a selection of papers that present a variety of methodologies for content identification and extraction, as well as for content fusion and regeneration. The chapters cover various aspects of these challenges, depending on the nature of the information sought (names vs. events) and the nature of the sources (news streams, image captions, scientific research papers, etc.). The volume aims to offer a broad and representative sample of studies from this very active research field.
About the authors
*Thierry Poibeau* holds a PhD and a Habilitation in Computer Science from the University Paris 13. From 1998 to 2003 he worked for Thales Research and Technology, where he was responsible for research activities in information extraction. Since 2003 he has been a CNRS research fellow, working first at the Laboratoire d'Informatique de Paris-Nord (LIPN) and now at the LaTTiCe laboratory. He is also an affiliated lecturer at the Research Centre for English and Applied Linguistics (RCEAL) of the University of Cambridge (UK). Thierry Poibeau has managed and/or participated in several national and European projects related to his research areas. He has published one book on information extraction, three international patents and more than 50 papers in books, international journals and conferences. He has organised several international workshops and acted as a programme committee member for over 20 international conferences (e.g. IJCAI, COGSCI, COLING) and associated workshops.

*Horacio Saggion* is a Research Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain. He obtained his PhD in Computer Science from the University of Montreal in 2000. He works in the areas of information extraction, text summarization, and semantic analysis. He has published over 50 works in journals, international conferences, workshops, and books. He has been principal researcher and manager for a number of national and international projects, and has organized a series of workshops in the areas of information extraction and summarization. He has also served on scientific committees for international conferences in Human Language Technology.

*Jakub Piskorski* received his M.Sc. in Computer Science from the University of Saarbrücken, Germany, in 1994 and his PhD from the Polish Academy of Sciences in Warsaw, Poland, in 2002. Jakub is a Research Associate at the Polish Academy of Sciences and also manages projects related to NLP in the R&D Unit of the Warsaw-based EU Border Security Agency Frontex. Previously he held the post of Research Fellow at the Joint Research Centre of the European Commission in Ispra, Italy, and worked as a Senior Software Engineer and Researcher at the German Research Centre for Artificial Intelligence in Saarbruecken and at the Department of Information Systems at Poznan University of Economics. He has also consulted for several companies on information extraction technology. His main areas of interest center on information extraction, finite-state methods in NLP, shallow text processing, and efficient multilingual application-oriented NLP solutions. Jakub is author or co-author of around 80 peer-reviewed international conference and workshop papers, journal articles and book chapters in Computer Science and Computational Linguistics. He has co-organized several scientific events and served as a program committee member for a number of international scientific events.

*Roman Yangarber* obtained his MS and PhD in Computer Science in 2000 at New York University (NYU), USA, with a concentration in Computational Linguistics. Prior to moving to Finland in 2004, he held the post of Assistant Research Professor at the Courant Institute of Mathematical Sciences at NYU, where he specialized in Natural Language Processing. His main research area has been machine learning for automatic acquisition of semantic knowledge from plain text, in particular from large news streams. He has been an organizer, editorial board member and program committee member for a number of international scientific events, conferences, organizations and journals. He has authored or co-authored over 40 papers in Computational Linguistics. At the University of Helsinki, he has held the post of Acting Professor; he currently leads two research projects and participates in two others (nationally and EU-funded) in text mining and linguistic analysis, and supervises PhD and MS students.
Reviews
From the reviews:

"This book is a compilation of chapters selected from a series of papers presented at Multi-source, Multilingual Information Extraction and Summarization (MMIES), a workshop series on these two topics. ... This book could be useful for researchers and technicians interested in advances in these fields." (Mercedes Martínez González, ACM Computing Reviews, March, 2013)