
Evaluation : A Systematic Approach

by
Edition: 6th
ISBN13: 9780761908937
ISBN10: 0761908935
Format: Hardcover
Pub. Date: 1/1/1999
Publisher(s): SAGE PUBLICATIONS INC
List Price: $81.95

Buy New Textbook: $79.90 (usually ships in 3-5 business days)
Rent Textbook: Sold Out
Used Textbook: Sold Out
eTextbook: Not Available
More New and Used from Private Sellers: starting at $7.59

Questions About This Book?

What version or edition is this?
This is the 6th edition with a publication date of 1/1/1999.
What is included with this book?
  • The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.

Related Products

  • Evaluation : A Systematic Approach

Summary

A long-standing benchmark in the field, Evaluation has been further improved and updated. Relied on by over 90,000 readers as the text on how to design, implement, and appraise the utility of social programmes, the Sixth Edition has been completely revised to include the latest techniques and approaches, as well as guidelines for tailoring evaluations to fit specific programmes and social contexts.

Table of Contents

Preface ix
1 Programs, Policies, and Evaluations 3
  What Is Evaluation Research? 4
  A Brief History of Evaluation Research 9
  An Overview of Program Evaluation 20
  Evaluation Research in Practice 27
  Who Can Do Evaluations? 33
  Summary 35
2 Tailoring Evaluations 37
  What Aspects of the Evaluation Plan Must Be Tailored? 38
  What Considerations Should Guide Evaluation Planning? 39
  The Nature of the Evaluator-Stakeholder Relationship 54
  Evaluation Questions and Evaluation Methods 62
  Stitching It All Together 74
  Summary 76
3 Identifying Issues and Formulating Questions 79
  What Makes a Good Evaluation Question? 81
  Determining the Questions on Which the Evaluation Should Focus 88
  Collating Evaluation Questions and Setting Priorities 115
  Summary 116
4 Assessing the Need for a Program 119
  The Role of Evaluators in Diagnosing Social Conditions and Service Needs 120
  Defining Social Problems 125
  Specifying the Extent of the Problem: When, Where, and How Big? 126
  Defining and Identifying the Targets of Interventions 137
  Describing the Nature of Service Needs 146
  Summary 151
5 Expressing and Assessing Program Theory 155
  The Evaluability Assessment Perspective 157
  Eliciting and Expressing Program Theory 160
  Assessing Program Theory 173
  Summary 187
6 Monitoring Program Process and Performance 191
  What Is Program Monitoring? 192
  Perspectives on Program Monitoring 203
  Monitoring Service Utilization 207
  Monitoring Organizational Functions 214
  Monitoring Program Outcomes 220
  Collecting Data for Monitoring 225
  Analysis of Monitoring Data 229
  Summary 231
7 Strategies for Impact Assessment 235
  Key Concepts in Impact Assessment 236
  Extraneous Confounding Factors 241
  Design Effects 244
  Design Strategies for Isolating the Effects of Extraneous Factors 257
  A Catalog of Impact Assessment Designs 260
  Judgmental Approaches to Impact Assessment 268
  Quantitative Versus Qualitative Data in Impact Assessments 269
  Inference Validity Issues in Impact Assessment 271
  Choosing the Right Impact Assessment Strategy 274
  Summary 275
8 Randomized Designs for Impact Assessment 279
  Units of Analysis 279
  Experiments as an Impact Assessment Strategy 280
  Analyzing Randomized Experiments 292
  Limitations on the Use of Randomized Experiments 297
  Summary 305
9 Quasi-Experimental Impact Assessments 309
  Quasi-Experimental Impact Assessment 309
  Constructing Comparison Groups in Quasi-Experimental Evaluations 313
  Some Cautions in Using Constructed Controls 332
  Summary 340
10 Assessment of Full-Coverage Programs 343
  Nonuniform Full-Coverage Programs 344
  Reflexive Controls 347
  Shadow Controls 356
  Summary 363
11 Measuring Efficiency 365
  Key Concepts in Efficiency Analysis 367
  Methodology of Cost-Benefit Analysis 374
  Cost-Effectiveness Analysis 390
  Summary 394
12 The Social Context of Evaluation 397
  The Purposefulness of Evaluation Activities 398
  The Social Ecology of Evaluations 400
  The Profession of Evaluation 417
  Evaluation Standards, Guidelines, and Ethics 425
  Utilization of Evaluation Results 431
  Epilogue 436
  Summary 439
Glossary 441
References 451
Author Index 477
Subject Index 483
About the Authors 499

