Theory of Statistical Inference is designed as a reference on statistical inference for researchers and students at the graduate or advanced undergraduate level. It presents a unified treatment of the foundational ideas of modern statistical inference and would be suitable for a core course in a graduate program in statistics or biostatistics. The emphasis is on the application of mathematical theory to the problem of inference, leading to an optimization theory that allows the choice of statistical methods yielding the most efficient use of data. The book shows how a small number of key concepts, such as sufficiency, invariance, stochastic ordering, decision theory, and vector space algebra, play a recurring and unifying role.
The volume can be divided into four sections. Part I provides a review of the required distribution theory. Part II introduces the problem of statistical inference. This includes the definitions of the exponential family, invariant and Bayesian models. Basic concepts of estimation, confidence intervals and hypothesis testing are introduced here. Part III constitutes the core of the volume, presenting a formal theory of statistical inference. Beginning with decision theory, this section then covers uniformly minimum variance unbiased (UMVU) estimation, minimum risk equivariant (MRE) estimation and the Neyman-Pearson test. Finally, Part IV introduces large sample theory. This section begins with stochastic limit theorems, the δ-method, the Bahadur representation theorem for sample quantiles, large sample U-estimation, the Cramér-Rao lower bound and asymptotic efficiency. A separate chapter is then devoted to estimating equation methods. The volume ends with a detailed development of large sample hypothesis testing, based on the likelihood ratio test (LRT), Rao score test and the Wald test.
Features
This volume includes treatment of linear and nonlinear regression models, ANOVA models, generalized linear models (GLM) and generalized estimating equations (GEE).
An introduction to decision theory (including risk, admissibility, classification, Bayes and minimax decision rules) is presented. The importance of this sometimes overlooked topic to statistical methodology is emphasized.
The volume emphasizes throughout the important role that can be played by group theory and invariance in statistical inference.
Nonparametric (rank-based) methods are derived by the same principles used for parametric models and are therefore presented as solutions to well-defined mathematical problems, rather than as robust heuristic alternatives to parametric methods.
Each chapter ends with a set of theoretical and applied exercises integrated with the main text. Problems involving R programming are included.
Appendices summarize the necessary background in analysis, matrix algebra and group theory.
"The large variety of topics covered in the current version allows students at all level, be it bachelor, master or Ph.D. to find topics suitable for their own study. On one hand, the book contains fairly advanced topics to help researchers in their own research areas, and on the other hand, it provides enough materials for lecturers to prepare lecture notes and design an entire course on theoretical statistics. Overall, I am extremely excited about Theory of Statistical Inference and eagerly waiting for numerous students, researchers, lecturers and even working professionals to enjoy the immense benefit coming out of its publication."
Somabha Mukherjee, National University of Singapore, Singapore, Journal of the American Statistical Association, February 2024.