Murray Aitkin (University of Melbourne, Australia)
Introduction to Statistical Modelling and Inference
- Hardcover
The book is based on model-based theory, which is used widely by scientists in many fields. It covers simple experimental and survey designs, and probability models up to and including generalised linear (regression) models and some extensions of these, including finite mixtures.
Other customers were also interested in
- Luca Scrucca: Model-Based Clustering, Classification, and Density Estimation Using mclust in R, 82,99 €
- John Haigh (University of Sussex, Reader in Statistics): Probability, 12,99 €
- Gary L. Rosner: Bayesian Thinking in Biostatistics, 97,99 €
- Sujit Sahu (University of Southampton): Bayesian Modeling of Spatio-Temporal Data with R, 68,99 €
- Malte Grosser: Advanced R Solutions, 85,99 €
- Linda J. S. Allen (Texas Tech University, Lubbock, Texas, USA): An Introduction to Stochastic Processes with Applications to Biology, 146,99 €
- Simona Cocco (Director of Research, CNRS, Eco…): From Statistical Physics to Data-Driven Modelling, 81,99 €
Note: This item can only be delivered to a German shipping address.
Product details
- Publisher: Taylor & Francis Ltd
- Number of pages: 374
- Publication date: 30 September 2022
- Language: English
- Dimensions: 260 mm x 179 mm x 26 mm
- Weight: 944 g
- ISBN-13: 9781032105710
- ISBN-10: 1032105712
- Item no.: 64104480
- Manufacturer identification:
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- 06621 890
Murray Aitkin earned his BSc, PhD, and DSc in Mathematical Statistics from Sydney University. He completed his post-doctoral work at the Psychometric Laboratory, University of North Carolina, Chapel Hill. He has held teaching and lecturing positions at Virginia Polytechnic Institute, the University of New South Wales, and Macquarie University, along with research professor positions at Lancaster University (3 years, UK Social Science Research Council) and the University of Western Australia (5 years, Australian Research Council). He has been a Professor of Statistics at Lancaster University, Tel Aviv University, and the University of Newcastle, UK. He has been a visiting researcher and has held consulting positions at the Educational Testing Service (Fulbright Senior Fellow 1971-72 and Senior Statistician 1988-89). He was Chief Statistician (2000-2002) at the Education Statistics Services Institute, American Institutes for Research, Washington DC, and an advisor to the National Center for Education Statistics, US Department of Education. He is a Fellow of the American Statistical Association, an Elected Member of the International Statistical Institute, and an Honorary Member of the Statistical Modelling Society. He is an Honorary Professorial Associate at the University of Melbourne: Department of Psychology (2004-2008) and Department (now School) of Mathematics and Statistics (2008-present).
Preface.
1.1. What is Statistical Modelling? 1.2. What is Statistical Analysis? 1.3. What is Statistical Inference? 1.4. Why this book? 1.5. Why the focus on the Bayesian approach? 1.6. Coverage of this book. 1.7. Recent changes in technology. 1.8. Aims of the course.
2. What is (or are) Big Data?
3. Data and research studies. 3.1. Lifetimes of radio transceivers. 3.2. Clustering of V1 missile hits in South London. 3.3. Court case on vaccination risk. 3.4. Clinical trial of Depepsen for the treatment of duodenal ulcers. 3.5. Effectiveness of treatments for respiratory distress in newborn babies. 3.6. Vitamin K. 3.7. Species counts. 3.8. Toxicology in small animal experiments. 3.9. Incidence of Down's syndrome in four regions. 3.10. Fish species in lakes. 3.11. Absence from school. 3.12. Hostility in husbands of suicide attempters. 3.13. Tolerance of racial intermarriage. 3.14. Hospital bed use. 3.15. Dugong growth. 3.16. Simulated motorcycle collision. 3.17. Global warming. 3.18. Social group membership.
4. The StatLab data base. 4.1. Types of variables. 4.2. StatLab population questions.
5. Sample surveys - should we believe what we read? 5.1. Women and Love. 5.2. Would you have children? 5.3. Representative sampling. 5.4. Bias in the Newsday sample. 5.5. Bias in the Women and Love sample.
6. Probability. 6.1. Relative frequency. 6.2. Degree of belief. 6.3. StatLab dice sampling. 6.4. Computer sampling. 6.5. Probability for sampling. 6.6. Probability axioms. 6.7. Screening tests and Bayes's theorem. 6.8. The misuse of probability in the Sally Clark case. 6.9. Random variables and their probability distributions. 6.10. Sums of independent random variables.
7. Statistical inference I - discrete distributions. 7.1. Evidence-based policy. 7.2. The basis of statistical inference. 7.3. The survey sampling approach. 7.4. Model-based inference theories. 7.5. The likelihood function. 7.6. Binomial distribution. 7.7. Frequentist theory. 7.8. Bayesian theory. 7.9. Inferences from posterior sampling. 7.10. Sample design. 7.11. Parameter transformations. 7.12. The Poisson distribution. 7.13. Categorical variables. 7.14. Maximum likelihood. 7.15. Bayesian analysis.
8. Comparison of binomials: the Randomised Clinical Trial. 8.1. Definition. 8.2. Example - RCT of Depepsen for the treatment of duodenal ulcers. 8.3. Monte Carlo simulation. 8.4. RCT continued. 8.5. Bayesian hypothesis testing/model comparison. 8.6. Other measures of treatment difference. 8.7. The ECMO trials.
9. Data visualisation. 9.1. The histogram. 9.2. The empirical mass and cumulative distribution functions. 9.3. Probability models for continuous variables.
10. Statistical Inference II - the continuous exponential, Gaussian and uniform distributions. 10.1. The exponential distribution. 10.2. The exponential likelihood. 10.3. Frequentist theory. 10.4. Bayesian theory. 10.5. The Gaussian distribution. 10.6. The Gaussian likelihood function. 10.7. Frequentist inference. 10.8. Bayesian inference. 10.9. Hypothesis testing. 10.10. Frequentist hypothesis testing. 10.11. Bayesian hypothesis testing. 10.12. Pivotal functions. 10.13. Conjugate priors. 10.14. The uniform distribution.
11. Statistical Inference III - two-parameter continuous distributions. 11.1. The Gaussian distribution. 11.2. Frequentist analysis. 11.3. Bayesian analysis. 11.4. The lognormal distribution. 11.5. The Weibull distribution. 11.6. The gamma distribution. 11.7. The gamma likelihood.
12. Model assessment. 12.1. Gaussian model assessment. 12.2. Lognormal model assessment. 12.3. Exponential model assessment. 12.4. Weibull model assessment. 12.5. Gamma model assessment.
13. The multinomial distribution. 13.1. The multinomial likelihood. 13.2. Frequentist analysis. 13.3. Bayesian analysis. 13.4. Criticisms of the Haldane prior. 13.5. Inference for multinomial quantiles. 13.6. Dirichlet posterior weighting. 13.7. The frequentist bootstrap. 13.8. Stratified sampling and weighting.
14. Model comparison and model averaging. 14.4. The deviance. 14.5. Asymptotic distribution of the deviance. 14.6. Nested models. 14.7. Model choice and model averaging.
15. Gaussian linear regression models. 15.1. Simple linear regression. 15.2. Model assessment through residual examination. 15.3. Likelihood for the simple linear regression model. 15.4. Maximum likelihood. 15.5. Bayesian and frequentist inferences. 15.6. Model-robust analysis. 15.7. Correlation and prediction. 15.8. Probability model assessment. 15.9. "Dummy variable" regression. 15.10. Two-variable models. 15.11. Model assumptions. 15.12. The p-variable linear model. 15.13. The Gaussian multiple regression likelihood. 15.14. Interactions. 15.15. Ridge regression, the Lasso and the "elastic net". 15.16. Modelling boy birthweights. 15.17. Modelling girl intelligence at age 10 and family income. 15.18. Modelling of the hostility data. 15.19. Principal component regression.
16. Incomplete data and their analysis with the EM and DA algorithms. 16.1. The general incomplete data model. 16.2. The EM algorithm. 16.3. Missingness. 16.4. Lost data. 16.5. Censoring in the exponential distribution. 16.6. Randomly missing Gaussian observations. 16.7. Missing responses and/or covariates in simple and multiple regression. 16.8. Mixture distributions. 16.9. Bayesian analysis and the Data Augmentation algorithm.
17. Generalised linear models (GLMs). 17.1. The exponential family. 17.2. Maximum likelihood. 17.3. The GLM algorithm. 17.4. Bayesian package development. 17.5. Bayesian analysis from ML. 17.6. Binary response models. 17.7. The menarche data. 17.8. Poisson regression - fish species frequency. 17.9. Gamma regression.
18. Extensions of GLMs. 18.1. Double GLMs. 18.2. Maximum likelihood. 18.3. Bayesian analysis. 18.4. Segmented or broken-stick regressions. 18.5. Heterogeneous regressions. 18.6. Highly non-linear functions. 18.7. Neural networks. 18.8. Social networks and social group membership. 18.9. The motorcycle data.
19. Appendix 1 - length-biased sampling.
20. Appendix 2 - Two-component Gaussian mixture.
21. Appendix 3 - StatLab Variables.
22. Appendix 4 - a short history of statistics from 1890.
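As a flavour of the book's Bayesian, model-based approach, the analysis of a binomial proportion covered in Chapter 7 (Sections 7.6, 7.8 and 7.9) can be sketched in a few lines of code. The sketch below is illustrative only and is not taken from the book; the data (12 successes in 40 trials) and the uniform Beta(1, 1) prior are hypothetical, and Python with NumPy and SciPy stands in for whatever software a reader prefers.

# Illustrative sketch: conjugate Bayesian inference for a binomial proportion.
# With a Beta(a, b) prior and r successes in n trials, the posterior is
# Beta(a + r, b + n - r); inferences are then drawn by posterior sampling.
import numpy as np
from scipy import stats

n, r = 40, 12                 # hypothetical data: 12 successes in 40 trials
a, b = 1, 1                   # uniform Beta(1, 1) prior
posterior = stats.beta(a + r, b + n - r)

rng = np.random.default_rng(0)
draws = posterior.rvs(size=10_000, random_state=rng)
print("posterior mean:", draws.mean())
print("95% credible interval:", np.quantile(draws, [0.025, 0.975]))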