
Methods Matter: Improving Causal Inference in Educational and Social Science Research

by Richard J. Murnane; John B. Willett
  • ISBN13:

    9780199753864

  • ISBN10:

    0199753865

  • Format: Hardcover
  • Copyright: 2010-09-17
  • Publisher: Oxford University Press


Summary

Educational policy-makers around the world constantly make decisions about how to use scarce resources to improve the education of children. Unfortunately, their decisions are rarely informed by evidence on the consequences of these initiatives in other settings. Nor are decisions typically accompanied by well-formulated plans to evaluate their causal impacts. As a result, knowledge about what works in different situations has been very slow to accumulate. Over the last several decades, advances in research methodology, administrative record keeping, and statistical software have dramatically increased the potential for researchers to conduct compelling evaluations of the causal impacts of educational interventions, and the number of well-designed studies is growing. Written in clear, concise prose, Methods Matter: Improving Causal Inference in Educational and Social Science Research offers essential guidance for those who evaluate educational policies. Using numerous examples of high-quality studies that have evaluated the causal impacts of important educational interventions, the authors go beyond the simple presentation of new analytical methods to discuss the controversies surrounding each study, and provide heuristic explanations that are also broadly accessible. Murnane and Willett offer strong methodological insights on causal inference, while also examining the consequences of a wide variety of educational policies implemented in the U.S. and abroad. Representing a unique contribution to the literature surrounding educational research, this landmark text will be invaluable for students and researchers in education and public policy, as well as those interested in social science.

Author Biography


Richard J. Murnane, Juliana W. and William Foss Thompson Professor of Education and Society at Harvard University, is an economist who focuses his research on the relationships between education and the economy, teacher labor markets, the determinants of children's achievement, and strategies for making schools more effective.

John B. Willett, Charles William Eliot Professor of Education at Harvard University, is a quantitative methodologist who has devoted his career to improving the research design and data-analytic methods used in education and the social sciences, with a particular emphasis on the design of longitudinal research and the analysis of longitudinal data.

Table of Contents

Preface
The Challenge for Educational Research
The Long Quest
The Quest Is Worldwide
What This Book Is About
What to Read Next
The Importance of Theory
What Is Theory?
Theory in Education
Voucher Theory
What Kind of Theories?
What to Read Next
Designing Research to Address Causal Questions
Conditions to Strive for in All Research
Making Causal Inferences
Past Approaches to Answering Causal Questions in Education
The Key Challenge of Causal Research
What to Read Next
Investigator-Designed Randomized Experiments
Conducting Randomized Experiments
The Potential Outcomes Framework
An Example of a Two-Group Experiment
Analyzing Data from Randomized Experiments
The Better Your Research Design, the Simpler Your Data Analysis
Bias and Precision in the Estimation of Experimental Effects
What to Read Next
Challenges in Designing, Implementing, and Learning from Randomized Experiments
Critical Decisions in the Design of Experiments
Defining the Treatment
Defining the Population from Which Participants Will Be Sampled
Deciding Which Outcomes to Measure
Deciding How Long to Track Participants
Threats to the Validity of Randomized Experiments
Contamination of the Treatment-Control Contrast
Cross-overs
Attrition from the Sample
Participation in an Experiment Itself Affects Participants' Behavior
Gaining Support for Conducting Randomized Experiments: Examples from India
Evaluating an Innovative Input Approach
Evaluating an Innovative Incentive Policy
What to Read Next
Statistical Power and Sample Size
Statistical Power
Reviewing the Process of Statistical Inference
Defining Statistical Power
Factors Affecting Statistical Power
The Strengths and Limitations of Parametric Tests
The Benefits of Covariates
The Reliability of the Outcome Measure Matters
The Choice Between One-Tailed and Two-Tailed Tests
What to Read Next
Experimental Research When Participants Are Clustered Within Intact Groups
Random-Intercepts Multilevel Model to Estimate Effect Size When Intact Groups Are Randomized to Experimental Conditions
Statistical Power When Intact Groups of Participants Are Randomized to Experimental Conditions
Statistical Power of the Cluster-Randomized Design and Intraclass Correlation
Fixed-Effects Multilevel Models to Estimate Effect Size When Intact Groups of Participants Are Randomized to Experimental Conditions
Specifying a Fixed-Effects Multilevel Model
Choosing Between Random- and Fixed-Effects Specifications
What to Read Next
Using Natural Experiments to Provide "Arguably Exogenous" Treatment Variability
Natural- and Investigator-Designed Experiments: Similarities and Differences
Two Examples of Natural Experiments
The Vietnam-Era Draft Lottery
The Impact of an Offer of Financial Aid for College
Sources of Natural Experiments
Choosing the Width of the Analytic Window
Threats to Validity in Natural Experiments with a Discontinuity Design
Accounting for the Relationship Between the Outcome and the Forcing Variable in a Discontinuity Design
Actions by Participants Can Undermine Exogenous Assignment to Experimental Conditions in a Natural Experiment with a Discontinuity Design
What to Read Next
Estimating Causal Effects Using a Regression-Discontinuity Approach
Maimonides' Rule and the Impact of Class Size on Student Achievement
A Simple First-Differences Analysis
A Difference-in-Differences Analysis
A Basic Regression-Discontinuity Analysis
Choosing an Appropriate Bandwidth
Generalizing the Relationship Between the Outcome and the Forcing Variable
Specification Checks Using Pseudo-Outcomes and Pseudo-Cut-Offs
Regression-Discontinuity Designs and Statistical Power
Additional Threats to Validity in a Regression-Discontinuity Design
What to Read Next
Introducing Instrumental-Variables Estimation
Introducing Instrumental-Variables Estimation
Bias in the OLS Estimate of the Causal Effect of Education on Civic Engagement
Instrumental-Variables Estimation
Two Critical Assumptions That Underpin Instrumental-Variables Estimation
Alternative Ways of Obtaining the Instrumental-Variables Estimate
Obtaining an Instrumental-Variables Estimate by the Two-Stage Least-Squares Method
Obtaining an Instrumental-Variables Estimate by Simultaneous-Equations Estimation
Extensions of the Basic Instrumental-Variables Estimation Approach
Incorporating Exogenous Covariates into Instrumental-Variables Estimation
Incorporating Multiple Instruments into the First-Stage Model
Examining the Impact of Interactions Between the Endogenous Question Predictor and Exogenous Covariates in the Second-Stage Model
Choosing Appropriate Functional Forms for Outcome/Predictor Relationships in First- and Second-Stage Models
Finding and Defending Instruments
Proximity of Educational Institutions
Institutional Rules and Personal Characteristics
Deviations from Cohort Trends
The Search Continues
What to Read Next
Using IVE to Recover the Treatment Effect in a Quasi-Experiment
The Notion of a "Quasi-Experiment"
Using IVE to Estimate the Causal Impact of a Treatment in a Quasi-Experiment
Further Insight into the IVE (LATE) Estimate, in the Context of Quasi-Experimental Data
Using IVE to Resolve "Fuzziness" in a Regression-Discontinuity Design
What to Read Next
Dealing with Bias in Treatment Effects Estimated from Nonexperimental Data
Reducing Observed Bias by the Method of Stratification
Stratifying on a Single Covariate
Stratifying on Covariates
Reducing Observed Bias by Direct Control for Covariates Using Regression Analysis
Reducing Observed Bias Using a Propensity-Score Approach
Estimation of the Treatment Effect by Stratifying on Propensity Scores
Estimation of the Treatment Effect by Matching on Propensity Scores
Estimation of the Treatment Effect by Weighting by the Inverse of the Propensity Scores
A Return to the Substantive Question
What to Read Next
Methodological Lessons from the Long Quest
Be Clear About Your Theory of Action
Learn About Culture, Rules, and Institutions in the Research Setting
Understand the Counterfactual
Always Worry About Selection Bias
Use Multiple Outcome Measures
Be on the Lookout for Longer-Term Effects
Develop a Plan for Examining Impacts on Subgroups
Interpret Your Research Results Correctly
Pay Attention to Anomalous Results
Recognize That Good Research Always Raises New Questions
What to Read Next
Substantive Lessons and New Questions
Lower the Cost of School Enrollment
Reduce Commuting Time
Reduce Out-of-Pocket Educational Costs
Reduce Opportunity Costs
Change Children's Daily Experiences in School
More Books?
Smaller Classes?
Better Teaching?
Improve Incentives
Improve Incentives for Teachers
Improve Incentives for Students
Create More Schooling Options for Poor Children
New Private-School Options
New Public-School Options
Summing Up
Final Words
References
Index
Table of Contents provided by Ingram. All Rights Reserved.
