Since the early days of performance assessment, human ratings have been subject to various forms of error and bias. Expert raters often arrive at different ratings for the very same performance, and assessment outcomes may depend largely on which raters happen to assign them. This book provides an introduction to many-facet Rasch measurement (MFRM), a psychometric approach that establishes a coherent framework for drawing reliable, valid, and fair inferences from rater-mediated assessments, thus addressing the problem of fallible human ratings. Revised and updated throughout, the Second Edition includes a stronger focus on the Facets computer program, emphasizing the pivotal role that MFRM plays in validating the interpretations and uses of assessment outcomes.
«[...] this book covers a diverse array of both basic and important topics related to Rasch measurement, providing insights into complexities and intricacies involved in rater-mediated performance assessment. This book is highly recommended to researchers, practitioners, and other stakeholders who are interested in Rasch measurement, but only have a limited understanding of Rasch modeling. The book is also suitable for advanced learners and applied researchers who could consult advanced modeling techniques discussed in Chapters 8 and 9.»
(Chao Han, Measurement: Interdisciplinary Research and Perspectives, 2019, Vol. 17, No. 2)
«The strengths of this book are numerous. Discussing the MFRM approach in the context of language testing can make the concept of facets (raters, students, etc.) and their associated issues (e.g., rater variability) appealing to educators. The book discusses the limitations of standard approaches in reducing between-rater variation and shows how the MFRM approach can be used to regulate error proneness of human ratings that is inherent in such ratings. The author's broad perspective on psychometric argument of fair scores can be used with an eye toward equity in human ratings of students' performance. The quality of the examples in the book is impressive. It provides an understanding of the potential in using the MFRM approach in the broader fields of education, human health sciences, and many other fields.»
(Daeryong Seo & Husein Taherbhai, Applied Psychological Measurement, 2013, Vol. 37, No. 2)