This book provides a coherent account of the foundations of statistical inference and shows how statistics can support inductive inferences about a broader context based only on a limited dataset, such as a random sample drawn from a larger population. By relating these basics to the methodological debate about inferential errors associated with p-values and statistical significance testing, the book gives readers a clear grasp of what statistical inference presupposes, and what it can and cannot do. To facilitate intuition, the presentation throughout the book is kept as non-technical as possible.
The central inspiration behind the text comes from the scientific debate about good statistical practices and the replication crisis. Calls for statistical reform include an unprecedented methodological warning from the American Statistical Association in 2016, a special issue "Statistical Inference in the 21st Century: A World Beyond p < 0.05" of The American Statistician in 2019, and a widely supported call to "Retire statistical significance" in Nature in 2019.
The book elucidates the probabilistic foundations and the potential of sample-based inferences, including random data generation, effect size estimation, and the assessment of estimation uncertainty caused by random error. Building on a thorough understanding of those basics, it then describes the p-value concept and the null-hypothesis-significance-testing ritual, and finally points out the ensuing inferential errors. This equips readers to avoid misguided statistical routines and misinterpretations of statistical quantities in the future.
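As a minimal illustrative sketch (not taken from the book), the following Python snippet walks through the workflow just described with hypothetical numbers: drawing a random sample, estimating an effect size, quantifying the estimation uncertainty caused by random error, and computing a p-value for a null hypothesis of no effect.

```python
# Illustrative sketch only; all parameter values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# 1) Random data generation: two samples drawn from a larger "population".
control = rng.normal(loc=100.0, scale=15.0, size=50)
treated = rng.normal(loc=106.0, scale=15.0, size=50)

# 2) Effect size estimation: difference in sample means.
effect = treated.mean() - control.mean()

# 3) Estimation uncertainty due to random error: standard error and a 95% CI.
se = np.sqrt(control.var(ddof=1) / control.size + treated.var(ddof=1) / treated.size)
ci = (effect - 1.96 * se, effect + 1.96 * se)

# 4) A p-value for the null hypothesis of no difference (Welch's t-test).
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

print(f"effect estimate: {effect:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f}), p = {p_value:.3f}")
```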
Intended for readers with an interest in understanding the role of statistical inference, the book provides a prudent assessment of the knowledge gain that can be obtained from a particular set of data, taking into account the uncertainty caused by random error. In particular, it offers an accessible resource for graduate students as well as statistical practitioners who have a basic knowledge of statistics. Last but not least, it is aimed at scientists with a genuine methodological interest in the above-mentioned reform debate.