27,95 € incl. VAT (21% below the 35,30 € price of the printed edition, paperback)
Available immediately as a download
  • Format: PDF

Product description
Latent semantic mapping (LSM) is a generalization of latent semantic analysis (LSA), a paradigm originally developed to capture hidden word patterns in a text document corpus. In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents. It operates under the assumption that there is some latent semantic structure in the data, which is partially obscured by the randomness of word choice with respect to retrieval. Algebraic and/or statistical techniques are brought to bear to estimate this structure and get rid of the obscuring "noise." This results in a parsimonious continuous parameter description of words and documents, which then replaces the original parameterization in indexing and retrieval. This approach exhibits three main characteristics:

  • Discrete entities (words and documents) are mapped onto a continuous vector space;
  • This mapping is determined by global correlation patterns; and
  • Dimensionality reduction is an integral part of the process.

Such fairly generic properties are advantageous in a variety of different contexts, which motivates a broader interpretation of the underlying paradigm. The outcome (LSM) is a data-driven framework for modeling meaningful global relationships implicit in large volumes of (not necessarily textual) data. This monograph gives a general overview of the framework, and underscores the multifaceted benefits it can bring to a number of problems in natural language understanding and spoken language processing. It concludes with a discussion of the inherent tradeoffs associated with the approach, and some perspectives on its general applicability to data-driven information extraction.

Contents:
  • I. Principles: Introduction / Latent Semantic Mapping / LSM Feature Space / Computational Effort / Probabilistic Extensions
  • II. Applications: Junk E-mail Filtering / Semantic Classification / Language Modeling / Pronunciation Modeling / Speaker Verification / TTS Unit Selection
  • III. Perspectives: Discussion / Conclusion / Bibliography
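The mapping described above can be made concrete with a small, self-contained sketch. The following Python example is not taken from the monograph: the toy word-document matrix, the choice of rank R = 2, and the cosine comparison are illustrative assumptions. It builds a word-document matrix, truncates its singular value decomposition, and compares documents in the resulting low-dimensional space, which is the basic mechanism underlying both LSA and LSM.

    import numpy as np

    # Toy word-document matrix W (rows = words, columns = documents).
    # LSA/LSM typically uses weighted co-occurrence counts; raw counts
    # are used here purely for illustration.
    W = np.array([
        [2.0, 0.0, 1.0, 0.0],   # "speech"
        [1.0, 0.0, 2.0, 0.0],   # "language"
        [0.0, 3.0, 0.0, 1.0],   # "retrieval"
        [0.0, 1.0, 0.0, 2.0],   # "query"
    ])

    # Singular value decomposition: W = U S V^T.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)

    # Dimensionality reduction: keep only the top-R singular directions.
    R = 2
    U_R, S_R, Vt_R = U[:, :R], np.diag(s[:R]), Vt[:R, :]

    # Continuous representations in the reduced space:
    # rows of U_R S_R act as word vectors, rows of (S_R Vt_R)^T as document vectors.
    word_vecs = U_R @ S_R
    doc_vecs = (S_R @ Vt_R).T

    def cosine(a, b):
        # Closeness measure commonly used in the reduced vector space.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Documents 1 and 3 share vocabulary, documents 1 and 2 do not,
    # and the reduced-space similarities reflect that.
    print(cosine(doc_vecs[0], doc_vecs[2]))
    print(cosine(doc_vecs[0], doc_vecs[1]))

The three characteristics listed in the description show up directly: the discrete words and documents become rows of continuous vectors, the decomposition is driven by global co-occurrence patterns across the whole matrix, and keeping only R singular directions is the dimensionality reduction step.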

For legal reasons, this download can only be delivered to a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the author
Jerome R. Bellegarda received the Diplôme d'Ingénieur degree (summa cum laude) from the École Nationale Supérieure d'Électricité et de Mécanique, Nancy, France, in 1984, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Rochester, Rochester, NY, in 1984 and 1987, respectively. From 1988 to 1994, he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, NY, working on speech and handwriting recognition, particularly acoustic and chirographic modeling. In 1994, he joined Apple Inc., Cupertino, CA, where he is currently Apple Distinguished Scientist in Speech & Language Technologies. At Apple he has worked on many facets of human language processing, including speech recognition, speech synthesis, statistical language modeling, voice authentication, speaker adaptation, dialog interaction, metadata extraction, and semantic classification. In these areas he has written close to 150 journal and conference papers, and holds over 30 patents. He has also contributed chapters to several edited books, most recently Pattern Recognition in Speech and Language Processing (New York, NY: CRC Press, 2003) and Mathematical Foundations of Speech and Language Processing (New York, NY: Springer-Verlag, 2004). His research interests include statistical modeling algorithms, voice-driven man-machine communications, multiple input/output modalities, and multimedia knowledge management. Dr. Bellegarda has served on many international scientific committees, review panels, and editorial boards.