This book develops methods for two key problems in the analysis of large-scale surveys: dealing with incomplete data and making inferences about sparsely represented subdomains. The presentation is committed to two particular methods, multiple imputation for missing data and multivariate composition for small-area estimation. The methods are presented as developments of established approaches, attending to their deficiencies. Thus the change to more efficient methods can be gradual, sensitive to the management priorities in large research organisations and multidisciplinary teams and to other reasons for inertia. The typical setting of each problem is addressed first, and then the constituency of the applications is widened to reinforce the view that the general method is essential for modern survey analysis. The general tone of the book is not "from theory to practice," but "from current practice to better practice." The third part of the book, a single chapter, presents a method for efficient estimation under model uncertainty. It is inspired by the solution for small-area estimation and is an example of "from good practice to better theory."
A strength of the presentation is its chapters of case studies, one for each problem. Whenever possible, examples and illustrations are preferred to theoretical argument. The book is suitable for graduate students and researchers who are acquainted with the fundamentals of sampling theory and have a good grounding in statistical computing, or can be read in conjunction with an intensive period of learning and establishing one's own modern computing and graphical environment that will serve the reader for most analytical work in the future.
While some analysts might regard data imperfections and deficiencies, such as nonresponse and limited sample size, as someone else's failure that bars effective and valid analysis, this book presents them as respectable analytical and inferential challenges: opportunities to harness computing power in the service of high-quality, socially relevant statistics.
Overriding in this approach is the general principle of doing the best, for the consumer of statistical information, that can be done with what is available. The reputation of government statistics as a rigid, procedure-based and operation-centred activity, distant from the mainstream of statistical theory and practice, is refuted most resolutely.
After leaving De Montfort University in 2004, where he was a Senior Research Fellow in Statistics, Nick Longford founded the statistical research and consulting company SNTL in Leicester, England. He was awarded the first Campion Fellowship (2000-02) for methodological research in United Kingdom government statistics. He has served as Associate Editor of the Journal of the Royal Statistical Society, Series A, and the Journal of Educational and Behavioral Statistics, and as an Editor of the Journal of Multivariate Analysis. He is a member of the Editorial Board of the British Journal of Mathematical and Statistical Psychology. He is the author of two other monographs, Random Coefficient Models (Oxford University Press, 1993) and Models for Uncertainty in Educational Testing (Springer-Verlag, 1995).
From the reviews:

"...Longford offers a lucid account of these challenges in the context of sample surveys and provides potential solutions. ...Ultimately, this book serves as an excellent reference source to guide and improve statistical practice in survey settings exhibiting these problems." (Psychometrika, Vol. 72, No. 1, March 2007)

"I am convinced this book will be useful to practitioners...[and a] valuable resource for future research in this field." (Jan Kordos, Statistics in Transition, Vol. 7, No. 5, June 2006)

"To sum up, I think this is an excellent book and it thoroughly covers methods to deal with incomplete data problems and small-area estimation. It is a useful and suitable book for survey statisticians, as well as for researchers and graduate students interested in sampling designs." (Ramon Cleries Soler, SORT, Statistics and Operations Research Transactions, Vol. 30, No. 1, January-June 2006)

"This book develops methods for two key problems in the analysis of large-scale surveys dealing with incomplete data and making inferences about sparsely represented subdomains. ... A strength of the presentation is the chapters on case studies, one for each problem. Whenever possible, turning to examples and illustrations is preferred to theoretical arguments. The book is suitable for graduate students and researchers ... ." (T. Postelnicu, Zentralblatt MATH, Vol. 1092 (18), 2006)

"The book contains an array of topics related to missing data, small-area estimation and combining estimators. The target audiences are graduate students and researchers ... . The main strength of this book is the presentation of case studies, and each chapter offers a reasonable number of exercises. Overall, it is a well-written book, which makes pleasant reading indeed, and I recommend it to survey statisticians and those readers interested in the field of missing data and small-area estimation." (S. E. Ahmed, Technometrics, Vol. 49 (3), August 2007)