
Program Evaluation: Methods and Case Studies

by Emil J. Posavac and Raymond G. Carey
  • ISBN13: 9780132553322
  • ISBN10: 0132553325

  • Edition: 5th
  • Format: Hardcover
  • Copyright: 1996-06-01
  • Publisher: Taylor & Francis

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now

List Price: $71.00 (save up to $17.75)

  • Buy Used: $53.25
    Free Shipping

    Usually ships in 2-4 business days

Summary

For use in Program Evaluation, Applied Research Methods, and similar courses in Psychology, Sociology, Nursing, Political Science, Education, and Human Resources at the advanced undergraduate or master's level.

This volume provides a comprehensive yet accessible introduction to the skills, attitudes, and methods required to evaluate programs offered in public and private organizations. The authors stress the development of a program-improvement focus that begins with program conceptualization and continues through implementation and the analysis of outcomes and costs. They also provide detailed descriptions of methods for improving program evaluation reports and encouraging utilization.

Table of Contents

Preface  xiii

Program Evaluation: An Overview  1
  Evaluation Tasks That Need to Be Done  3
  Devote Resources to Meeting Unmet Needs  4
  Verify That Planned Programs Do Provide Services  4
  Examine the Results  4
  Determine Which Services Produce the Best Results  5
  Select the Types of Programs That Offer the Most Needed Services  5
  Provide Information Needed to Maintain and Improve Quality  6
  Watch for Unplanned Side Effects  6
  Common Types of Program Evaluations  7
  The Evaluation of Need  7
  The Evaluation of Process  7
  The Evaluation of Outcome  8
  The Evaluation of Efficiency  9
  Activities Often Confused with Program Evaluation  10
  Different Types of Evaluations for Different Kinds of Programs  11
  Organizations Needing Program Evaluations  11
  Different Types of Needs  12
  Levels of Program Sponsorship  13
  Purpose of Program Evaluation  13
  The Roles of Evaluators  15
  Work Settings  15
  Consultants Compared to Internal Evaluators  16
  Compatibility of Evaluation and Service  18
  Evaluation and Other Activities of Organizations  19
  Summary and Preview  20
  Study Questions  20
  Additional Resources  21

Planning an Evaluation  22
  An Overview of Evaluation Models  23
  The Traditional Model  23
  Social Science Research Model  24
  Industrial Inspection Model  24
  Black Box Evaluation  24
  Objectives-Based Evaluation  25
  Goal-Free Evaluation  25
  Fiscal Evaluation  25
  Accountability Model  26
  Expert Opinion Model  26
  Naturalistic Model  26
  An Improvement-Focused Model  27
  Steps in Preparing to Conduct an Evaluation  27
  Identify the Program and Its Stakeholders  28
  Become Familiar with Information Needs  29
  Planning an Evaluation  33
  Dysfunctional Attitudes Toward Program Evaluation  36
  Expectations of a "Slam-Bang" Effect  36
  Inappropriate Pressure from Stakeholders  37
  Worry That Asking About Program Quality Is Unprofessional  37
  Fear That Evaluation Will Inhibit Innovation  37
  Fear That the Program Will Be Terminated  38
  Fear That Information Will Be Misused  39
  Fear That Qualitative Understanding May Be Supplanted  39
  Fear That Evaluation Drains Program Resources  40
  Fear of Losing Control of the Program  40
  Fear That Evaluation Has Little Impact  40
  Summary and Preview  41
  Study Questions  41
  Additional Resources  41

Selecting Criteria and Setting Standards  42
  Importance of Selecting Criteria and Setting Standards  43
  Criteria That Reflect the Program's Intent  44
  Criteria That the Staff Can Influence  45
  Criteria That Can Be Measured Reliably  45
  Criteria That the Stakeholders Participate in Selecting  46
  Developing Goals and Objectives  46
  How Much Agreement of Goals Is Needed?  47
  Different Types of Goals  48
  Goals That Apply to All Programs  50
  Evaluation Criteria and Evaluation Questions  51
  Does the Program or Plan Match the Values of the Stakeholders?  51
  Does the Program or Plan Match the Needs of the People to Be Served?  52
  Does the Program as Implemented Fulfill the Plans?  52
  Do the Outcomes Achieved Match the Goals?  54
  Is There Support for the Program Theory?  56
  Is the Program Accepted?  58
  Are the Resources Devoted to the Program Being Expended Appropriately?  60
  Some Practical Limitations on Selecting Evaluation Criteria  62
  Evaluation Budget  62
  Time Available for the Project  63
  Criteria That Are Credible to Stakeholders  63
  Summary and Preview  64
  Study Questions  64
  Additional Resources  64

Developing Measures  65
  Sources of Data for Evaluating  65
  Recipients of Services  65
  Providers of Services  68
  Observers  69
  Which Sources Should Be Used?  70
  Qualities of Good Assessment Procedures  72
  Multiple Variables  72
  Using Multiple Measures in an Evaluation of a Summer Community Program for Youth  73
  Nonreactive Measures  74
  Important Variables  74
  Valid Measures  74
  Reliable Measures  75
  Sensitivity to Change  77
  Cost-Effective Measures  78
  Types of Measures of Evaluation Criteria  78
  Written Surveys and Interviews with Program Participants  78
  Checklists, Tests, and Records  80
  Preparing Special Surveys  81
  Format of a Survey  81
  Preparing Survey Items  82
  Instructions and Pretests  83
  Summary and Preview  84
  Study Questions  84
  Additional Resources  84

Ethics in Program Evaluation  85
  Standards for the Practice of Evaluation  86
  Ethical Issues Involved in the Treatment of People  87
  Assignment to Program Groups  87
  Informed Consent  88
  Confidentiality  89
  Role Conflicts Facing Evaluators  89
  Recognizing the Different Needs of Stakeholders  91
  Program Managers Are Concerned with Efficiency  91
  Staff Members Seek Assistance in Service Delivery  92
  Clients Want Effective and Appropriate Services  92
  Community Members Want Cost-Effective Programs  93
  The Validity of Evaluations  93
  Valid Measurement Instruments  93
  Skilled Data Collectors  94
  Appropriate Research Design  95
  Adequate Descriptions of Program and Procedures  95
  Avoiding Possible Negative Side Effects of Evaluation Procedures  96
  Can Someone Be Hurt by Inaccurate Findings?  96
  Statistical Type II Errors  97
  Pay Attention to Unplanned Effects  98
  Implicit Values Held by the Evaluator  99
  Ethical Problems Evaluators Report  100
  Summary and Preview  100
  Study Questions  101
  Additional Resources  101

The Assessment of Need  102
  Definitions of Need  103
  Sources of Information for the Assessment of Need  104
  Describing the Current Situation  105
  Social Indicators of Need  106
  Community Surveys of Need  107
  Residents Being Served  110
  Key Informants  111
  Focus Groups and Community Forums  112
  Inadequate Assessment of Need  115
  Failing to Examine Need  115
  Failing to Examine the Context of Need  116
  Failing to Relate Need to Implementation Plans  117
  Failing to Deal with Ignorance of Need  117
  Using Need Assessments in Program Planning  118
  Summary and Preview  119
  Study Questions  119
  Additional Resources  120

Monitoring the Operation of Programs  121
  Monitoring Programs as a Means of Evaluating Programs  122
  What to Summarize with Information Systems  124
  Relevant Information  124
  Actual State of Program  124
  Program Participants  125
  Providers of Services  125
  Program Records and Information Systems  125
  Problems with Agency Records  126
  Increasing the Usefulness of Records  126
  How Records Can Be Used to Monitor Programs  126
  Following the Course of Service  128
  Threatening Uses of Information Systems  136
  Avoiding Common Problems in Implementing an Information System  139
  Avoid Serving the Needs of Only One Group  139
  Avoid Duplicating Records  139
  Avoid a Focus on Technology  139
  Summary and Preview  140
  Study Questions  140
  Additional Resources  141

Single Group, Nonexperimental Outcome Evaluations  142
  Single-Group Evaluation Designs  142
  Posttest Only  142
  Pretest-Posttest  143
  Uses of Single-Group, Descriptive Designs  143
  Did the Participants Meet a Criterion?  143
  Did the Participants Improve?  144
  A Pretest-Posttest Design to Evaluate a Peer-Based Program to Prevent Skin Cancer  144
  Did the Participants Improve Enough?  144
  Relating Change to Service Intensity and Participant Characteristics  145
  Threats to Internal Validity  148
  Actual but Nonprogram-Related Changes in the Participants  148
  Apparent Changes Dependent on Who Was Observed  149
  Changes Related to Methods of Obtaining Observations  152
  Effects of Interactions of These Threats  153
  Internal Validity Threats Are Double-Edged Swords  153
  Construct Validity in Pretest-Posttest Designs  154
  Overinterpreting the Results of Single-Group Designs  155
  Usefulness of Single-Group Designs as Initial Approaches to Program Evaluation  156
  Assessing the Usefulness of Further Evaluations  156
  Correlating Improvement with Other Variables  157
  Preparing the Facility for Further Evaluation  157
  Summary and Preview  158
  Study Questions  158
  Additional Resources  159

Quasi-Experimental Approaches to Outcome Evaluation  160
  Making Observations at a Greater Number of Intervals  161
  Time-Series Designs  163
  Analysis of Time-Series Designs  166
  Observing Other Groups  167
  Nonequivalent Control Group Designs  167
  Problems in Selecting Comparison Groups  169
  Regression-Discontinuity Design  171
  Nonequivalent Control Groups Used to Evaluate an Employee Incentive Plan  172
  Observing Other Dependent Variables  174
  Combining Designs to Increase Internal Validity  175
  Time-Series and Nonequivalent Control Groups  175
  Selective Control Design  176
  Summary and Preview  179
  Study Questions  179
  Additional Resources  180

Using Experiments to Evaluate Programs  181
  Experiments in Program Evaluation  181
  Benefits of Experiments  181
  Experimental Designs  182
  Objections to Experimentation  183
  Don't Experiment on Me!  183
  We Already Know What Is Best  184
  I Know What Is Best for My Client  184
  Experiments Are Just Too Much Trouble  185
  The Most Desirable Times to Conduct Experiments  186
  When a New Program Is Introduced  186
  When Stakes Are High  187
  When There Is Controversy About Program Effectiveness  187
  Teaching Doctors Communication Skills: An Evaluation with Random Assignment and Pretests  188
  When Policy Change Is Desired  189
  When Demand Is High  189
  Preserving an Experimental Design  189
  Precautions Before Data Collection  189
  Precautions While the Experiment Is in Progress  190
  Summary and Preview  192
  Study Questions  192
  Additional Resources  193

Analysis of Costs and Outcomes  194
  Cost Analyses and Budgets  195
  Types of Costs  195
  An Example Budget  197
  The Necessity of Examining Costs  198
  Comparing Outcomes to Costs  198
  The Essence of Cost-Benefit Analysis  199
  The Essence of Cost-Effectiveness Analysis  201
  When Outcomes Cannot Be Put into the Same Units  202
  Some Details of Cost Analyses  203
  Units of Analysis  203
  Future Costs and Benefits  204
  Who Pays the Costs and Who Reaps the Benefits?  205
  Using Cost-Benefit and Cost-Effectiveness Analyses  207
  Major Criticisms of Cost Analyses  207
  Psychological Benefits  207
  The Value of Lives  208
  Does Such Thinking Devalue Lives?  209
  Cost-Benefit and Cost-Effectiveness Analyses Require Many Assumptions  209
  The Value of Providing Smoking Cessation Clinics for Employees on Company Time  210
  Summary and Preview  211
  Study Questions  211
  Additional Resources  212

Qualitative Evaluation Methods  213
  Evaluation Settings Best Served by Qualitative Methods  214
  Admission to Graduate School  215
  Dissatisfaction with a Library Collection  215
  Evaluating a Political Campaign  216
  Gathering Qualitative Information  217
  The Central Importance of the Observer  217
  Observational Methods  218
  Using Qualitative Methods in an Evaluation of a University Library  221
  Interviewing to Obtain Qualitative Information  221
  Carrying Out Naturalistic Evaluations  224
  Phase One: Making Unrestricted Observations  224
  Phase Two: Integrating Impressions  225
  Phase Three: Sharing Interpretations  225
  Phase Four: Preparing Reports  225
  Are Qualitative Evaluations Subjective?  226
  Coordinating Qualitative and Quantitative Methods  227
  The Substance of the Evaluation  227
  Changing Emphases As Understanding Expands  228
  The Evaluation Questions  228
  Cost of Evaluation  229
  Philosophical Assumptions  229
  Summary and Preview  230
  Study Questions  230
  Additional Resources  231

Evaluation Reports: Interpreting and Communicating Findings  232
  Developing a Communication Plan  233
  Explore Stakeholder Information Needs  233
  Plan Reporting Meetings  233
  Communication and the Organizational Calendar  235
  Personal Presentations of Findings  235
  Need for Personal Presentations  235
  Content of Personal Presentations  236
  Audience for the Personal Presentation  238
  Distributing Drafts of Reports  239
  Content of Formal Written Evaluation Reports  239
  Purposes of the Formal Report  239
  Report Outline  240
  Describe the Context of the Evaluation  240
  Describe the Program Participants  242
  Justify the Criteria Selected  242
  Describe the Data Gathering Procedures  242
  Provide the Findings  243
  Develop Recommendations  245
  Appearance of Formal Reports  246
  Progress Reports and Press Releases  247
  Summary and Preview  247
  Study Questions  247
  Additional Resources  248

How to Encourage Utilization  249
  Obstacles to Effective Utilization  249
  Constraints on Managers  249
  Value Conflicts Among Stakeholders  250
  Misapplied Methodology  251
  Evaluating a Program at Arm's Length  251
  Dealing with Mixed Findings  252
  Don't Abdicate Your Responsibility  252
  Don't Take the Easy Way Out  253
  Use the Evaluation to Improve the Program  253
  Using Evaluations Showing No Program Effects  254
  When Can Evaluators Be Sure Groups Do Not Differ?  254
  Programs Are Often Valuable Even When There Is No Effect  255
  Evaluations of the Outcomes of Boot Camp Prisons: The Value of Finding No Differences Between Program and Comparison Groups  256
  Developing a Learning Culture  256
  Work with Stakeholders  256
  Adopt Developmental Interpretations  257
  Frame Findings in Terms of Improvements  258
  Treat Findings As Hypotheses  259
  Recognize Service Providers' Needs  259
  Keep Evaluation Findings on the Agency's Agenda  260
  Evaluation Attitude  260
  Summary and Possible Trends for Program Evaluation  262
  Study Questions  263
  Additional Resources  263

Appendix: Illustrative Evaluation Report  264
References  272
Name Index  294
Subject Index  300

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
