Johannes Ledolter, Lea S. VanderVelde
Analyzing Textual Information
From Words to Meanings through Numbers
- Paperback
Other customers were also interested in
- Analyzing and Interpreting Qualitative Research (108,99 €)
- John Fox (McMaster University, Canada): A Mathematical Primer for Social Statistics (48,99 €)
- Matthijs Koopmans (Mercy College, USA): Using Time Series to Analyze Long-Range Fractal Patterns (48,99 €)
- Haiyan Bai: Propensity Score Methods and Applications (48,99 €)
- Michael Smithson: Generalized Linear Models for Bounded and Limited Quantitative Variables (48,99 €)
- George Engelhard, Jr.: Rasch Models for Solving Measurement Problems (45,99 €)
- Wes Bonifay: Multidimensional Item Response Theory (50,99 €)
Researchers in the social sciences and beyond increasingly confront massive quantities of text data requiring analysis, from historical letters to the constant stream of content on social media. Traditional texts on statistical analysis have focused on numbers; this book provides a practical introduction to the quantitative analysis of textual data. Using up-to-date R methods, it takes readers through the text analysis process, from text mining and pre-processing to final analysis, and includes two major case studies, one historical and one more contemporary, that demonstrate the practical application of these methods. Currently, there is no other introductory how-to book on textual data analysis with R that is both up-to-date and applicable across the social sciences. Code and a variety of additional resources are available on an accompanying website for the book.
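To give a concrete flavor of the workflow the description refers to, here is a minimal sketch in R of going from raw text to a document-term matrix of word frequencies, the "bag of words" representation covered in Chapter 4 of the contents below. The two toy documents, the doc_id column, and the choice of the dplyr and tidytext packages are illustrative assumptions for this sketch, not code taken from the book or its website.

```r
# Minimal illustrative sketch (not from the book): tokenize two toy
# documents, remove stop words, and build a document-term matrix.
library(dplyr)
library(tidytext)

docs <- tibble(
  doc_id = c(1, 2),  # invented example documents
  text   = c("Text mining turns words into numbers.",
             "Numbers then reveal patterns in the words.")
)

word_counts <- docs %>%
  unnest_tokens(word, text) %>%           # one row per word, lowercased
  anti_join(stop_words, by = "word") %>%  # drop common stop words
  count(doc_id, word, sort = TRUE)        # word frequencies per document

# Cast the counts into a document-term matrix of frequencies
# (cast_dtm() requires the tm package to be installed)
dtm <- cast_dtm(word_counts, doc_id, word, n)
dtm
```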
Note: This item can only be shipped to a German delivery address.
Product Details
- Series: Quantitative Applications in the Social Sciences
- Publisher: SAGE Publications Inc
- Number of pages: 192
- Publication date: July 13, 2021
- Language: English
- Dimensions: 213 mm x 138 mm x 11 mm
- Weight: 230 g
- ISBN-13: 9781544390000
- ISBN-10: 1544390009
- Item no.: 61111113
- Manufacturer information:
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- 06621 890
JOHANNES LEDOLTER holds professorships in both the Business School, where he is Robert Thomas Holmes Professor of Business Analytics, and the Department of Statistics and Actuarial Science at the University of Iowa. He is a Fellow of the American Statistical Association and the American Society for Quality, and an elected member of the International Statistical Institute. He is the author of several books, including Statistical Methods for Forecasting; Introduction to Regression Modeling; Testing 1-2-3: Experimental Design with Applications in Marketing and Service Operations; and Data Mining and Business Analytics with R. He was Professor of Statistics at the Vienna University of Economics and Business from 1997 to 2015 and held visiting professorships at Princeton, Yale, Stanford, and the University of Chicago. Since 2011, he has been an Associate Investigator at the Center for Prevention and Treatment of Vision Loss at the Iowa City VA Health Care System, which studies optic nerve and retinal disorders in relation to traumatic brain injury. Professor Ledolter enjoys working on multidisciplinary projects that involve both numeric and text information.
Series Editor's Introduction
Preface
Acknowledgments
About the Authors
Chapter 1: Introduction
1.1 Text Data
1.2 The Two Applications Considered in This Book
1.3 Introductory Example and Its Analysis Using the R Statistical Software
1.4 The Introductory Example Revisited, Illustrating Concordance and Collocation Using Alternative Software
1.5 Concluding Remarks
1.6 References
Chapter 2: A Description of the Studied Text Corpora and a Discussion of Our Modeling Strategy
2.1 Introduction to the Corpora: Selecting the Texts
2.2 Debates of the 39th U.S. Congress, as Recorded in the Congressional Globe
2.3 The Territorial Papers of the United States
2.4 Analyzing Text Data: Bottom-Up or Top-Down Analysis
2.5 References
Appendix to Chapter 2: The Complete Congressional Record
Chapter 3: Preparing Text for Analysis: Text Cleaning and Formatting
3.1 Text Cleaning
3.2 Text Formatting
3.3 Concluding Remarks
3.4 References
Chapter 4: Word Distributions: Document-Term Matrices of Word Frequencies and the "Bag of Words" Representation
4.1 Document-Term Matrices of Frequencies
4.2 Displaying Word Frequencies
4.3 Co-Occurrence of Terms in the Same Document
4.4 The Zipf Law: An Interesting Fact About the Distribution of Word Frequencies
4.5 References
Chapter 5: Metavariables and Text Analysis Stratified on Metavariables
5.1 The Significance of Stratification and the Importance of Metavariables
5.2 Analysis of the Territorial Papers
5.3 Analysis of Speeches From the 39th Congress
5.4 References
Chapter 6: Sentiment Analysis
6.1 Lexicons of Sentiment-Charged Words
6.2 Applying Sentiment Analysis to the Letters of the Territorial Papers
6.3 Using Other Sentiment Dictionaries and the R Software tidytext for Sentiment Analysis
6.4 Concluding Remarks: An Alternative Approach for Sentiment Analysis
6.5 References
Chapter 7: Clustering of Documents
7.1 Clustering Documents
7.2 Measures for the Closeness and the Distance of Documents
7.3 Methods for Clustering Documents
7.4 Illustrating Clustering Methods on a Simulated Example
7.5 References
Chapter 8: Classification of Documents
8.1 Introduction
8.2 Classification Procedures
8.3 Two Examples Using the Congressional Speech Database
8.4 Concluding Remarks on Authorship Attribution: Commenting on the Field of Stylometry
8.5 References
Chapter 9: Modeling Text Data: Topic Models
9.1 Topic Models
9.2 Fitting Topic Models to the Two Corpora Studied in This Book
9.3 References
Chapter 10: n-Grams and Other Ways of Analyzing Adjacent Words
10.1 Analysis of Bigrams
10.2 Text Windows to Measure Word Associations Within a Neighborhood of Words and a Discussion of the R Package text2vec
10.3 Illustrating the Use of n-Grams: Speeches of the 39th Congress
Chapter 11: Concluding Remarks
Appendix: Listing of Website Resources
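Chapter 6 in the contents above covers lexicon-based sentiment analysis with the R package tidytext. As a rough illustration of that general approach, here is a minimal sketch; the example letters, the letter_id column, and the use of the Bing lexicon are assumptions made for this sketch, not material taken from the book.

```r
# Illustrative sketch of dictionary-based sentiment scoring with
# tidytext; the two "letters" below are invented for this example.
library(dplyr)
library(tidyr)
library(tidytext)

letters_df <- tibble(
  letter_id = c(1, 2),
  text = c("The settlers rejoiced at the good and plentiful harvest.",
           "The bitter dispute left the governor angry and resentful.")
)

letters_df %>%
  unnest_tokens(word, text) %>%                        # tokenize each letter
  inner_join(get_sentiments("bing"), by = "word") %>%  # keep lexicon matches
  count(letter_id, sentiment) %>%                      # tally pos/neg per letter
  pivot_wider(names_from = sentiment, values_from = n,
              values_fill = 0) %>%
  mutate(net_sentiment = positive - negative)          # simple net score
```

The net score here is just the count of positive words minus the count of negative words per document, one common baseline for dictionary-based sentiment analysis.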