Richard B. Darlington, Andrew F. Hayes
Regression Analysis and Linear Models
Concepts, Applications, and Implementation
- Hardcover
Emphasizing conceptual understanding over mathematics, this user-friendly text introduces linear regression analysis to students and researchers across the social, behavioral, consumer, and health sciences.
Other customers were also interested in
- Elena Llaudet: Data Analysis for Social Science, 148,99 €
- David De Vaus: Analyzing Social Science Data: 50 Key Problems in Data Analysis, 264,99 €
- Prue Chamberlayne: The Turn to Biographical Methods in Social Science, 202,99 €
- Fons J. R. van de Vijver: Multilevel Analysis of Individuals and Cultures, 189,99 €
- Wim J. van der Linden: Linear Models for Optimal Test Design, 122,99 €
- Scott M. Lynch: Introduction to Applied Bayesian Statistics and Estimation for Social Scientists, 125,99 €
- E. J. Mishan: Elements of Cost-Benefit Analysis (Routledge Revivals), 181,99 €
Note: This item can only be shipped to a German delivery address.
Product Details
- Publisher: Guilford Publications
- Number of pages: 661
- Publication date: September 27, 2016
- Language: English
- Dimensions: 261 mm x 182 mm x 38 mm
- Weight: 1328 g
- ISBN-13: 9781462521135
- ISBN-10: 1462521134
- Item no.: 44971691
- Manufacturer information
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- 06621 890
Richard B. Darlington, PhD, is Emeritus Professor of Psychology at Cornell University. He is a Fellow of the American Association for the Advancement of Science and has published extensively on regression and related methods, the cultural bias of mental tests, the long-term effects of preschool programs, and, most recently, the neuroscience of brain development and evolution.

Andrew F. Hayes, PhD, is Distinguished Research Professor in the Haskayne School of Business at the University of Calgary, Alberta, Canada. His research and writing on data analysis have been published widely. Dr. Hayes is the author of Introduction to Mediation, Moderation, and Conditional Process Analysis and Statistical Methods for Communication Science, as well as coauthor, with Richard B. Darlington, of Regression Analysis and Linear Models. He teaches data analysis, primarily at the graduate level, and frequently conducts workshops on statistical analysis throughout the world. His website is www.afhayes.com.
Table of Contents
List of Symbols and Abbreviations
1. Statistical Control and Linear Models
1.1 Statistical Control
1.1.1 The Need for Control
1.1.2 Five Methods of Control
1.1.3 Examples of Statistical Control
1.2 An Overview of Linear Models
1.2.1 What You Should Know Already
1.2.2 Statistical Software for Linear Modeling and Statistical Control
1.2.3 About Formulas
1.2.4 On Symbolic Representations
1.3 Chapter Summary
2. The Simple Regression Model
2.1 Scatterplots and Conditional Distributions
2.1.1 Scatterplots
2.1.2 A Line through Conditional Means
2.1.3 Errors of Estimate
2.2 The Simple Regression Model
2.2.1 The Regression Line
2.2.2 Variance, Covariance, and Correlation
2.2.3 Finding the Regression Line
2.2.4 Example Computations
2.2.5 Linear Regression Analysis by Computer
2.3 The Regression Coefficient versus the Correlation Coefficient
2.3.1 Properties of the Regression and Correlation Coefficients
2.3.2 Uses of the Regression and Correlation Coefficients
2.4 Residuals
2.4.1 The Three Components of Y
2.4.2 Algebraic Properties of Residuals
2.4.3 Residuals as Y Adjusted for Differences in X
2.4.4 Residual Analysis
2.5 Chapter Summary
3. Partial Relationship and the Multiple Regression Model
3.1 Regression Analysis with More Than One Predictor Variable
3.1.1 An Example
3.1.2 Regressors
3.1.3 Models
3.1.4 Representing a Model Geometrically
3.1.5 Model Errors
3.1.6 An Alternative View of the Model
3.2 The Best-Fitting Model
3.2.1 Model Estimation with Computer Software
3.2.2 Partial Regression Coefficients
3.2.3 The Regression Constant
3.2.4 Problems with Three or More Regressors
3.2.5 The Multiple Correlation R
3.3 Scale-Free Measures of Partial Association
3.3.1 Semipartial Correlation
3.3.2 Partial Correlation
3.3.3 The Standardized Regression Coefficient
3.4 Some Relations among Statistics
3.4.1 Relations among Simple, Multiple, Partial, and Semipartial Correlations
3.4.2 Venn Diagrams
3.4.3 Partial Relationships and Simple Relationships May Have Different Signs
3.4.4 How Covariates Affect Regression Coefficients
3.4.5 Formulas for bj, prj, srj, and R
3.5 Chapter Summary
4. Statistical Inference in Regression
4.1 Concepts in Statistical Inference
4.1.1 Statistics and Parameters
4.1.2 Assumptions for Proper Inference
4.1.3 Expected Values and Unbiased Estimation
4.2 The ANOVA Summary Table
4.2.1 Data = Model + Error
4.2.2 Total and Regression Sums of Squares
4.2.3 Degrees of Freedom
4.2.4 Mean Squares
4.3 Inference about the Multiple Correlation
4.3.1 Biased and Less Biased Estimation of TR²
4.3.2 Testing a Hypothesis about TR
4.4 The Distribution of and Inference about a Partial Regression Coefficient
4.4.1 Testing a Null Hypothesis about Tbj
4.4.2 Interval Estimates for Tbj
4.4.3 Factors Affecting the Standard Error of bj
4.4.4 Tolerance
4.5 Inferences about Partial Correlations
4.5.1 Testing a Null Hypothesis about Tprj and Tsrj
4.5.2 Other Inferences about Partial Correlations
4.6 Inferences about Conditional Means
4.7 Miscellaneous Issues in Inference
4.7.1 How Great a Drawback Is Collinearity?
4.7.2 Contradicting Inferences
4.7.3 Sample Size and Nonsignificant Covariates
4.7.4 Inference in Simple Regression (When k = 1)
4.8 Chapter Summary
5. Extending Regression Analysis Principles
5.1 Dichotomous Regressors
5.1.1 Indicator or Dummy Variables
5.1.2 Y Is a Group Mean
5.1.3 The Regression Coefficient for an Indicator Is a Difference
5.1.4 A Graphic Representation
5.1.5 A Caution about Standardized Regression Coefficients for Dichotomous Regressors
5.1.6 Artificial Categorization of Numerical Variables
5.2 Regression to the Mean
5.2.1 How Regression Got Its Name
5.2.2 The Phenomenon
5.2.3 Versions of the Phenomenon
5.2.4 Misconceptions and Mistakes Fostered by Regression to the Mean
5.2.5 Accounting for Regression to the Mean Using Linear Models
5.3 Multidimensional Sets
5.3.1 The Partial and Semipartial Multiple Correlation
5.3.2 What It Means If PR = 0 or SR = 0
5.3.3 Inference Concerning Sets of Variables
5.4 A Glance at the Big Picture
5.4.1 Further Extensions of Regression
5.4.2 Some Difficulties and Limitations
5.5 Chapter Summary
6. Statistical versus Experimental Control
6.1 Why Random Assignment?
6.1.1 Limitations of Statistical Control
6.1.2 The Advantage of Random Assignment
6.1.3 The Meaning of Random Assignment
6.2 Limitations of Random Assignment
6.2.1 Limitations Common to Statistical Control and Random Assignment
6.2.2 Limitations Specific to Random Assignment
6.2.3 Correlation and Causation
6.3 Supplementing Random Assignment with Statistical Control
6.3.1 Increased Precision and Power
6.3.2 Invulnerability to Chance Differences between Groups
6.3.3 Quantifying and Assessing Indirect Effects
6.4 Chapter Summary
7. Regression for Prediction
7.1 Mechanical Prediction and Regression
7.1.1 The Advantages of Mechanical Prediction
7.1.2 Regression as a Mechanical Prediction Method
7.1.3 A Focus on R Rather Than the Regression Weights
7.2 Estimating True Validity
7.2.1 Shrunken versus Adjusted R
7.2.2 Estimating TRS
7.2.3 Shrunken R Using Statistical Software
7.3 Selecting Predictor Variables
7.3.1 Stepwise Regression
7.3.2 All Subsets Regression
7.3.3 How Do Variable Selection Methods Perform?
7.4 Predictor Variable Configurations
7.4.1 Partial Redundancy (the Standard Configuration)
7.4.2 Complete Redundancy
7.4.3 Independence
7.4.4 Complementarity
7.4.5 Suppression
7.4.6 How These Configurations Relate to the Correlation between Predictors
7.4.7 Configurations of Three or More Predictors
7.5 Revisiting the Value of Human Judgment
7.6 Chapter Summary
8. Assessing the Importance of Regressors
8.1 What Does It Mean for a Variable to Be Important?
8.1.1 Variable Importance in Substantive or Applied Terms
8.1.2 Variable Importance in Statistical Terms
8.2 Should Correlations Be Squared?
8.2.1 Decision Theory
8.2.2 Small Squared Correlations Can Reflect Noteworthy Effects
8.2.3 Pearson's r as the Ratio of a Regression Coefficient to Its Maximum Possible Value
8.2.4 Proportional Reduction in Estimation Error
8.2.5 When the Standard Is Perfection
8.2.6 Summary
8.3 Determining the Relative Importance of Regressors in a Single Regression Model
8.3.1 The Limitations of the Standardized Regression Coefficient
8.3.2 The Advantage of the Semipartial Correlation
8.3.3 Some Equivalences among Measures
8.3.4 Cohen's f²
8.3.5 Comparing Two Regression Coefficients in the Same Model
8.4 Dominance Analysis
8.4.1 Complete and Partial Dominance
8.4.2 Example Computations
8.4.3 Dominance Analysis Using a Regression Program
8.5 Chapter Summary
9. Multicategorical Regressors
9.1 Multicategorical Variables as Sets
9.1.1 Indicator (Dummy) Coding
9.1.2 Constructing Indicator Variables
9.1.3 The Reference Category
9.1.4 Testing the Equality of Several Means
9.1.5 Parallels with Analysis of Variance
9.1.6 Interpreting Estimated Y and the Regression Coefficients
9.2 Multicategorical Regressors as or with Covariates