The Routledge International Handbook of Automated Essay Evaluation (eBook, PDF)
Edited by Mark D. Shermis and Joshua Wilson
46,95 € (incl. VAT)
Available immediately via download
- Format: PDF
- Devices: PC
- With copy protection
This handbook is a definitive guide to the intersection of automation, artificial intelligence, and education. The volume captures the ongoing advancement of automated essay evaluation (AEE), reflecting its application in both large-scale and classroom-based assessments to support teaching and learning.
For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
Product details
- Publisher: Taylor & Francis
- Number of pages: 646
- Publication date: 27 June 2024
- Language: English
- ISBN-13: 9781040033241
- Item no.: 70568500
Mark D. Shermis was the principal investigator and academic advisor for the Automated Student Assessment Prize. He is currently Principal for Performance Assessment Analytics, LLC. Dr. Shermis has also held faculty and administrative positions at the University of Houston-Clear Lake, University of Akron, University of Florida, Florida International, Indiana University-Purdue University Indianapolis (IUPUI), and the University of Texas, USA. He is a frequently cited expert on machine scoring and co-author (with Frank DiVesta) of Classroom Assessment in Action.
Joshua Wilson is Associate Professor in the School of Education at the University of Delaware, USA. He researches ways that automation and artificial intelligence can improve assessment, teaching, and learning, with a specific focus on automated feedback, automated scoring, and automated writing evaluation. Notably, his research has attracted the support of sponsors such as the Institute of Education Sciences of the U.S. Department of Education, the Spencer Foundation, and the Bill and Melinda Gates Foundation.
Foreword
Jill Burstein
Section 1: Introduction to AEE and Modern AEE Systems
1. Introduction to Automated Evaluation
Mark D. Shermis and Joshua Wilson
2. Automated Essay Evaluation at Scale: Hybrid Automated Scoring/Hand Scoring in the Summative Assessment Program
Corey Palermo and Arianto Wibowo
3. Exploration of the Stacking Ensemble Learning Algorithm for Automated Scoring of Constructed-Response Items in Reading Assessment
Hong Jiao, Shuangshuang Xu, and Manqian Liao
4. Scoring Essays Written in Persian Using a Transformer-Based Model: Implications for Multilingual AES
Tahereh Firoozi and Mark J. Gierl
5. SmartWriting-Mandarin: An Automated Essay Scoring System for Chinese Foreign Language Learners
Tao-Hsing Chang and Yao-Ting Sung
6. NLP Application in the Hebrew Language for Assessment and Learning
Yoav Cohen, Anat Ben-Simon, Anat Bar-Siman-Tov, Yona Doleve, Tzur Karelitiz, and Effi Levi
Section 2: Expanding Automated Evaluation: Reading, Speech, Mathematics, and Writing Research
7. Automated Scoring for NAEP Short-Form Constructed Responses in Reading
Mark D. Shermis
8. Automated Scoring and Feedback for Spoken Language
Klaus Zechner and Ching-Ni Hsieh
9. Automated Scoring of Math Constructed-Response Items
Scott Hellman, Alejandro Andrade, Kyle Habermehl, Alicia Bouy, and Lee Becker
10. We Write Automated Scoring: Using ChatGPT for Scoring in Large-Scale Writing Research Projects
Kausalai (Kay) Wijekumar, Debra McKeown, Shuai Zhang, Pui-Wa Lei, Nikolaus Hruska, and Pablo Pirnay-Dummer
Section 3: Innovations in Automated Writing Evaluation
11. Exploring the Role of Automated Writing Evaluation as a Formative Assessment Tool Supporting Self-Regulated Learning in Writing
Joshua Wilson and Charles MacArthur
12. Supporting Students' Text-Based Evidence Use via Formative Automated Writing and Revision Assessment
Rip Correnti, Elaine Lin Wang, Lindsay Claire Matsumura, Diane Litman, Zhexiong Liu, and Tianwen Li
13. The Use of AWE in Non-English Majors: Student Responses to Automated Feedback and the Impact of Feedback Accuracy
Aysel Saricaoglu and Zeynep Bilki
14. Relationships Between Middle-School Teachers' Perceptions and Application of Automated Writing Evaluation and Student Performance
Amanda Delgado, Joshua Wilson, Corey Palermo, Tania M. Cruz Cordero, Matthew C. Myers, Halley Eacker, Andrew Potter, Jessica Coles, and Saimou Zhang
15. Automated Writing Trait Analysis
Paul Deane
16. Advances in Automating Feedback for Argumentative Writing: Feedback Prize as a Case Study
Perpetual Baffour and Scott Crossley
17. Automated Feedback in Formative Assessment
Harry A. Layman
Section 4: Factors Affecting the Performance of Automated Evaluation
18. Using Automated Scoring to Support Rating Quality Analyses for Human Raters
Stefanie A. Wind
19. Calibrating and Evaluating Automated Scoring Engines and Human Raters over Time Using Measurement Models
Stefanie A. Wind and Yangmeng Xu
20. AI Scoring and Writing Fairness
Mark D. Shermis
21. Automating Bias in Writing Evaluation: Sources, Barriers, and Recommendations
Maria Goldshtein, Amin G. Alhashim, and Rod D. Roscoe
22. Explainable AI and AWE: Balancing Tensions between Transparency and Predictive Accuracy
David Boulanger and Vivekanandan Suresh Kumar
23. Validity Argument Roadmap for Automated Scoring
David Dorsey, Hillary Michaels, and Steve Ferrara
Section 5: Technological Innovations: "Where Do We Go From Here?"
24. Redesigning Automated Scoring Engines to Include Deep Learning Models
Sue Lottridge, Chris Ormerod, and Milan Patel
25. Automated Short-Response Scoring for Automated Item Generation in Science Assessments
Jinnie Shin and Mark J. Gierl
26. Latent Dirichlet Allocation of Constructed Responses
Jordan M. Wheeler, Shiyu Wang, and Allan S. Cohen
27. Computational Language as a Window into Cognitive Functioning
Peter W. Foltz and Chelsea Chandler
28. Expanding AWE to Incorporate Reading and Writing Evaluation
Laura K. Allen, Püren Öncel, and Lauren E. Flynn
29. The Two U's in the Future of Automated Essay Evaluation: Universal Access and User-Centered Design
Danielle S. McNamara and Andrew Potter