Public Program Evaluation: A Statistical Guide

by Laura Langbein
  • ISBN13: 9780765613660
  • ISBN10: 0765613662

  • Edition: 1st
  • Format: Hardcover
  • Copyright: 2006-12-15
  • Publisher: Routledge
List Price: $175.00 (save up to $149.58)
  • Buy New: $148.75 with free shipping

    Usually ships in 3-5 business days.

Supplemental Materials

What is included with this book?

  • A new copy of this book includes any supplemental materials advertised. Please check the book's title to determine whether it should include access cards, study guides, lab manuals, CDs, or other materials.

Summary

This readable and comprehensive text is designed to equip students and practitioners with the statistical skills needed to meet government standards for public program evaluation. Even readers with little statistical training will find the explanations clear, supported by many illustrative examples, case studies, and applications. Far more than a collection of statistical techniques, the book opens with chapters on the overall context for successful program evaluations, then carefully explains the statistical methods, and the threats to internal and statistical validity, that correspond to each evaluation design. The authors go on to present a variety of methods for program analysis and advise readers on how to select the mix of methods most appropriate to the issues at hand, balancing methodology against the need for generality, the size of the evaluator's budget, the availability of data, and the need for quick results. Among this text's many important features:

  • maintains a practical focus on doing evaluation
  • integrates research design with corresponding statistical/econometric estimation methods
  • uses examples from many policy fields, not just social services
  • uses examples from domestic programs as well as developing countries
  • links program evaluation to the larger field of policy analysis

Author Biography

Laura Langbein is a professor in the Department of Public Administration and Policy in the School of Public Affairs at American University in Washington, DC.

Table of Contents

Preface ix

What This Book Is About 3
  What Is Program Evaluation? 3
  Types of Program Evaluations 8
  Basic Characteristics of Program Evaluation 13
  Relation of Program Evaluation to General Field of Policy Analysis 15
  Assessing Government Performance: Program Evaluation and GPRA 15
  A Brief History of Program Evaluation 16
  What Comes Next 18
  Key Concepts 19
  Do It Yourself 20
Performance Measurement and Benchmarking 21
  Program Evaluation and Performance Measurement: What Is the Difference? 21
  Benchmarking 24
  Reporting Performance Results 25
  Conclusion 27
  Do It Yourself 29
Defensible Program Evaluations: Four Types of Validity 33
  Defining Defensibility 33
  Types of Validity: Definitions 34
  Types of Validity: Threats and Simple Remedies 35
  Basic Concepts 54
  Do It Yourself 55
Internal Validity 56
  The Logic of Internal Validity 56
  Making Comparisons: Cross Sections and Time Series 59
  Threats to Internal Validity 60
  Summary 68
  Three Basic Research Designs 69
  Rethinking Validity: The Causal Model Workhorse 71
  Basic Concepts 73
  Do It Yourself 74
  A Summary of Threats to Internal Validity 74
Randomized Field Experiments 76
  Basic Characteristics 76
  Brief History 77
  Caveats and Cautions About Randomized Experiments 78
  Types of RFEs 82
  Issues in Implementing RFEs 94
  Threats to the Validity of RFEs: Internal Validity 98
  Threats to the Validity of RFEs: External Validity 101
  Threats to the Validity of RFEs: Measurement and Statistical Validity 103
  Conclusion 103
  Three Cool Examples of RFEs 104
  Basic Concepts 104
  Do It Yourself: Design a Randomized Field Experiment 105
The Quasi Experiment 106
  Defining Quasi-Experimental Designs 106
  The One-Shot Case Study 107
  The Posttest-Only Comparison-Group (PTCG) Design 109
  The Pretest-Posttest Comparison-Group (Nonequivalent Control-Group) Design 115
  The Pretest-Posttest (Single-Group) Design 118
  The Single Interrupted Time-Series Design 120
  The Interrupted Time-Series Comparison-Group (ITSCG) Design 127
  The Multiple Comparison-Group Time-Series Design 130
  Summary of Quasi-Experimental Design 131
  Basic Concepts 132
  Do It Yourself 133
The Nonexperimental Design: Variations on the Multiple Regression Theme 134
  What Is a Nonexperimental Design? 134
  Back to the Basics: The Workhorse Diagram 135
  The Nonexperimental Workhorse Regression Equation 136
  Data for the Workhorse Regression Equation 138
  Interpreting Multiple Regression Output 141
  Assumptions Needed to Believe That b Is a Valid Estimate of B [E(b) = B] 155
  Assumptions Needed to Believe the Significance Test for b 173
  What Happened to the R²? 179
  Conclusion 180
  Basic Concepts 180
  Introduction to Stata 183
  Do It Yourself: Interpreting Nonexperimental Results 187
Designing Useful Surveys for Evaluation 192
  The Response Rate 193
  How to Write Questions to Get Unbiased, Accurate, Informative Responses 200
  Turning Responses into Useful Information 207
  For Further Reading 215
  Basic Concepts 216
  Do It Yourself 217
Summing It Up: Meta-Analysis 220
  What Is Meta-Analysis? 220
  Example of a Meta-Analysis: Data 221
  Example of a Meta-Analysis: Variables 222
  Example of a Meta-Analysis: Data Analysis 223
  The Role of Meta-Analysis in Program Evaluation and Causal Conclusions 224
  For Further Reading 225
Notes 227
Index 249
About the Authors 261
