117,99 €
incl. VAT
Free shipping*
Ready to ship in 1-2 weeks
  • Hardcover

Product description
Empirical methods (data-driven, neural network-based, probabilistic, and statistical) seem to be the modern trend. Recently, OpenAI's ChatGPT, Google's Bard and Microsoft's Sydney chatbots have been garnering a lot of attention for their detailed answers across many knowledge domains. As a consequence, most AI researchers are no longer interested in trying to understand what common intelligence is or how intelligent agents construct scenarios to solve various problems. Instead, they now develop systems that extract solutions from massive databases used as cheat sheets. In the same manner, Natural Language Processing (NLP) software that uses training corpora associated with empirical methods is trendy, as most researchers in NLP today use large training corpora, always to the detriment of the development of formalized dictionaries and grammars.

Without questioning the intrinsic value of many software applications based on empirical methods, this volume aims to rehabilitate the linguistic approach to NLP. In the introduction, the editor uncovers several limitations and flaws of using training corpora to develop NLP applications, even the simplest ones, such as automatic taggers.
The first part of the volume is dedicated to showing how carefully handcrafted linguistic resources could be successfully used to enhance current NLP software applications. The second part presents two representative cases where data-driven approaches cannot be implemented simply because there is not enough data available for low-resource languages. The third part addresses the problem of how to treat multiword units in NLP software, which is arguably the weakest point of NLP applications today but has a simple and elegant linguistic solution.

It is the editor's belief that readers interested in Natural Language Processing will appreciate the importance of this volume, both for its questioning of training corpus-based approaches and for the intrinsic value of the linguistic formalization and the underlying methodology presented.


Note: This item can only be shipped to a German delivery address.
About the author
Max Silberztein is a Professor of Linguistics, Computational Linguistics and Computer Science at the Université de Franche-Comté. He is the author of three NLP software platforms (INTEX, NooJ and ATISHS) and two books (Dictionnaires électroniques et analyse automatique de textes: le système INTEX, Masson 1993; Formalizing Natural Languages: the NooJ approach, Wiley 2016), and the editor of over 15 volumes of selected proceedings in the Springer CCIS and LNCS series.