Dorothy V M Bishop, Paul Thompson
Evaluating What Works
An Intuitive Guide to Intervention Research for Practitioners
- Hardcover
Objective intervention research is vital for improving outcomes, but this is a complex area where it is all too easy to misinterpret evidence. This book uses practical examples to increase awareness of the numerous sources of bias that can lead to mistaken conclusions when evaluating interventions.
Other customers were also interested in
- Leo H Kahane, Regression Basics, 206,99 €
- Geoff Cumming, Introduction to the New Statistics, 239,99 €
- Advanced Structural Equation Modeling, 204,99 €
- Jamie D Riggs, Handbook for Applied Modeling: Non-Gaussian and Correlated Data, 140,99 €
- Longitudinal Models in the Behavioral and Related Sciences, 223,99 €
- Stephen Voltz, The Viral Video Manifesto: Why Everything You Know Is Wrong and How to Do What Really Works, 24,99 €
- Doing Qualitative Research in Psychology, 160,99 €
Note: This item can only be shipped to a German delivery address.
Product details
- Publisher: CRC Press
- Number of pages: 212
- Publication date: 7 December 2023
- Language: English
- Dimensions: 234mm x 156mm x 14mm
- Weight: 508g
- ISBN-13: 9781032591209
- ISBN-10: 103259120X
- Item no.: 69033612
- Manufacturer information:
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- gpsr@libri.de
Dorothy Bishop was Professor of Developmental Neuropsychology at the University of Oxford from 1998 to 2022. She is a Fellow of the Academy of Medical Sciences, a Fellow of the British Academy, and a Fellow of the Royal Society. She has been recognised with Honorary Fellowships from the Royal College of Speech and Language Therapists, the British Psychological Society, and the Royal College of Paediatrics and Child Health. She holds Honorary Doctorates from the University of Newcastle upon Tyne (UK), the University of Western Australia, Lund University (Sweden), the École Normale Supérieure (Paris), and the University of Liège (Belgium), and is an Honorary Fellow of St John's College, Oxford.

Paul Thompson is an Assistant Professor in Applied Statistics and the department lead for statistics and quantitative methods at the Centre for Educational Development, Appraisal and Research (CEDAR) at the University of Warwick. Between 2014 and 2021 he worked in the Department of Experimental Psychology at the University of Oxford on a wide range of projects, including behavioural, genetic, and neuroimaging (brain scanning) studies of developmental language disorders such as dyslexia and Developmental Language Disorder, and of language development in people with learning and developmental disabilities such as Down syndrome and autism.
Contents
1. Introduction
2. Why observational studies can be misleading
3. How to select an outcome measure
4. Improvement due to nonspecific effects of intervention
5. Limitations of the pre-post design: biases related to systematic change
6. Estimating unwanted effects with a control group
7. Controlling for selection bias: randomized assignment to intervention
8. The researcher as a source of bias
9. Further potential for bias: volunteers, dropouts, and missing data
10. The randomized controlled trial as a method for controlling biases
11. The importance of variation
12. Analysis of a two-group RCT
13. How big a sample do I need? Statistical power and type II errors
14. False positives, p-hacking and multiple comparisons
15. Drawbacks of the two-arm RCT
16. Moderators and mediators of intervention effects
17. Adaptive Designs
18. Cluster Randomized Controlled Trials
19. Cross-over designs
20. Single case designs
21. Can you trust the published literature?
22. Pre-registration and Registered Reports
23. Reviewing the literature before you start
24. Putting it all together
25. Comments on exercises
26. References