Amazon no longer offers textbook rentals. We do!

We're the #1 textbook rental company. Let us show you why.

Program Evaluation: Methods and Case Studies

by Emil J. Posavac and Raymond G. Carey
  • ISBN13: 9780132275606
  • ISBN10: 0132275600

  • Edition: 7th
  • Format: Hardcover
  • Copyright: 2011
  • Publisher: Taylor & Francis

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping on Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now
List Price: $123.20
Save up to $34.50
  • Buy Used: $88.70
    Free Shipping
    Usually ships in 24-48 hours.
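Where the advertised savings figure appears to come from (an inference; assuming it is simply the list price minus the used price):

    $123.20 − $88.70 = $34.50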


Summary

Comprehensive yet accessible, this book provides a practical introduction to the skills, attitudes, and methods required to assess the worth and value of human services offered in public and private organizations in a wide range of fields. Readers are introduced to the need for such activities, the methods for carrying out evaluations, and the essential steps in organizing findings into reports. The book focuses on smaller projects carried out by an internal evaluator (i.e., on the work of people who are closely associated with the service to be evaluated), and is designed to help program planners, developers, and evaluators to work with program staff members who might be threatened by program evaluation. Features case studies and short profiles of individual program evaluators engaged in conducting evaluations in private service agencies, foundations, universities, and federal, state, and local governments.

Topics: Program Evaluation: An Overview. Planning an Evaluation. Selecting Criteria and Setting Standards. Developing Measures. Ethics in Program Evaluation. The Assessment of Need. Monitoring the Operation of Programs. Single-Group, Nonexperimental Outcome Evaluations. Quasi-Experimental Approaches to Outcome Evaluation. Using Experiments to Evaluate Programs. Analysis of Costs and Outcomes. Qualitative Evaluation Methods. Evaluation Reports: Interpreting and Communicating Findings. How to Encourage Utilization.

For program evaluators, program planners, program administrators, and public administrators in all types of human services: criminal justice, corrections, public health, public administration, community nursing, educational administration, substance abuse program administration, social work, etc.

Author Biography

Raymond G. Carey is principal of R. G. Carey Associates. Emil J. Posavac is Professor Emeritus of Psychology at Loyola University of Chicago.

Table of Contents

Preface xiv
1 Program Evaluation: An Overview 1(22)
  EVALUATION TASKS THAT NEED TO BE DONE 3(4)
    Verify That Resources Would Be Devoted to Meeting Unmet Needs 4(1)
    Verify That Implemented Programs Do Provide Services 4(1)
    Examine the Outcomes 4(1)
    Determine Which Programs Produce the Most Favorable Outcomes 5(1)
    Select the Programs That Offer the Most Needed Types of Services 5(1)
    Provide Information to Maintain and Improve Quality 6(1)
    Watch for Unplanned Side Effects 6(1)
  COMMON TYPES OF PROGRAM EVALUATIONS 7(3)
    Assess Needs of the Program Participants 7(1)
    Examine the Process of Meeting the Needs 7(1)
    Measure the Outcomes of the Program 8(1)
    Integrate the Needs, Costs, and Outcomes 9(1)
  ACTIVITIES OFTEN CONFUSED WITH PROGRAM EVALUATION 10(1)
  DIFFERENT TYPES OF EVALUATIONS FOR DIFFERENT KINDS OF PROGRAMS 11(3)
    Organizations Needing Program Evaluations 11(1)
    Time Frames of Needs 12(1)
    Extensiveness of Programs 13(1)
  PURPOSE OF PROGRAM EVALUATION 14(2)
  THE ROLES OF EVALUATORS 16(7)
    A Variety of Work Settings 16(1)
    Comparison of Internal and External Evaluators 17(2)
    Evaluation and Service 19(1)
    Evaluation and Related Activities of Organizations 20(1)
  Summary and Preview 21(1)
  Study Questions 22(1)
  Additional Resource 22(1)
2 Planning an Evaluation 23(24)
  AN OVERVIEW OF EVALUATION MODELS 24(6)
    The Traditional Model 24(1)
    Social Science Research Model 25(1)
    Industrial Inspection Model 25(1)
    Black Box Evaluation 25(1)
    Objectives-Based Evaluation 26(1)
    Goal-Free Evaluation 26(1)
    Fiscal Evaluation 26(1)
    Accountability Model 27(1)
    Expert Opinion Model 27(1)
    Naturalistic or Qualitative Model 27(1)
    Success Case Method 28(1)
    Empowerment Evaluation 28(1)
    Theory-Driven Evaluation 28(1)
    An Improvement-Focused Approach 29(1)
  STEPS IN PREPARING TO CONDUCT AN EVALUATION 30(10)
    Identify the Program and Its Stakeholders 30(1)
    Become Familiar with Information Needs 31(4)
    Plan the Evaluation 35(5)
  DYSFUNCTIONAL ATTITUDES TOWARD PROGRAM EVALUATION 40(7)
    Assume That the Program Is Perfect 41(1)
    Fear That the Evaluation Will Offend the Staff 41(1)
    Fear That the Evaluation Will Inhibit Innovation 42(1)
    Fear That the Program Will Be Terminated 42(1)
    Fear That Information Will Be Misused 42(1)
    Fear That Qualitative Understanding May Be Supplanted 43(1)
    Fear That Evaluation Drains Program Resources 43(1)
    Fear of Losing Control of the Program 43(1)
    Fear That Evaluation Has Little Impact 44(1)
    The Effect of These Attitudes 45(1)
  Summary and Preview 45
  Study Questions 45(1)
  Additional Resource 46(1)
3 Selecting Criteria and Setting Standards 47(24)
  USEFUL CRITERIA AND STANDARDS 48(4)
    Criteria That Reflect a Program's Purposes 48(1)
    Criteria That the Staff Can Influence 49(1)
    Criteria That Can Be Measured Reliably and Validly 50(1)
    Criteria That Stakeholders Participate in Selecting 50(2)
  DEVELOPING GOALS AND OBJECTIVES 52(3)
    How Much Agreement on Goals Is Needed? 52(1)
    Different Types of Goals 53(2)
    Goals That Apply to All Programs 55(1)
  EVALUATION CRITERIA AND EVALUATION QUESTIONS 55(13)
    Does the Program or Plan Match the Values of the Stakeholders? 56(1)
    Does the Program or Plan Match the Needs of the People to Be Served? 56(1)
    Does the Program as Implemented Fulfill the Plans? 57(1)
    Do the Outcomes Achieved Match the Goals? 58(2)
    Using Program Theory 60(4)
    Is the Program Accepted? 64(1)
    Are the Resources Devoted to the Program Being Expended Appropriately? 65(3)
  SOME PRACTICAL LIMITATIONS IN SELECTING EVALUATION CRITERIA 68(3)
    Evaluation Budget 68(1)
    Time Available for the Project 68(1)
    Criteria That Are Credible to the Stakeholders 69(1)
  Summary and Preview 69(1)
  Study Questions 69(1)
  Additional Resource 70(1)
4 Developing Measures 71(23)
  SOURCES OF DATA FOR EVALUATION 71(8)
    Intended Beneficiaries of the Program 72(2)
    Providers of Services 74(1)
    Observers 75(2)
    Which Sources Should Be Used? 77(2)
  GOOD ASSESSMENT PROCEDURES 79(7)
    Use Multiple Variables 79(1)
    Use Nonreactive Measures 80(1)
    Use Variables Relevant to Information Needs 80(1)
    CASE STUDY 1: USING MULTIPLE MEASURES IN AN EVALUATION OF A SUMMER COMMUNITY PROGRAM FOR YOUTH 81(1)
    Use Valid Measures 81(1)
    Use Reliable Measures 82(2)
    Use Measures That Can Detect Change 84(1)
    Use Cost-Effective Measures 85(1)
  TYPES OF MEASURES OF EVALUATION CRITERIA 86(4)
    Written Surveys and Interviews with Program Participants 86(2)
    Checklists, Tests, and Records 88(2)
  PREPARING SPECIAL SURVEYS 90(4)
    Format of a Survey 90(1)
    Preparing Survey Items 91(1)
    Instructions and Pretests 92(1)
  Summary and Preview 92(1)
  Study Questions 92(1)
  Additional Resource 93(1)
5 Ethics in Program Evaluation 94(19)
  STANDARDS FOR THE PRACTICE OF EVALUATION 95(1)
  ETHICAL ISSUES INVOLVED IN THE TREATMENT OF PEOPLE 96(2)
    Compensating for Ineffective, Novel Treatments 96(1)
    Obtaining Informed Consent 97(1)
    Maintaining Confidentiality 98(1)
  ROLE CONFLICTS FACING EVALUATORS 98(2)
  RECOGNIZING THE NEEDS OF DIFFERENT STAKEHOLDERS 100(2)
    Program Managers Are Concerned with Efficiency 100(1)
    Staff Members Seek Assistance in Service Delivery 101(1)
    Clients Want Effective and Appropriate Services 101(1)
    Community Members Want Cost-Effective Programs 101(1)
  THE VALIDITY OF EVALUATIONS 102(3)
    Valid Measurement Instruments 102(1)
    Skilled Data Collectors 103(1)
    Appropriate Research Design 103(1)
    Adequate Descriptions of Program and Procedures 104(1)
  AVOIDING POSSIBLE NEGATIVE SIDE EFFECTS OF EVALUATION PROCEDURES 105(3)
    Can Someone Be Hurt by Inaccurate Findings? 105(1)
    Consider Statistical Type II Errors 106(1)
    Pay Attention to Unplanned Effects 107(1)
    Analyze Implicit Values Held by the Evaluator 107(1)
  INSTITUTIONAL REVIEW BOARDS AND PROGRAM EVALUATION 108(2)
  ETHICAL PROBLEMS EVALUATORS REPORT 110(3)
  Summary and Preview 111(1)
  Study Questions 111(1)
  Additional Resource 112(1)
6 The Assessment of Need 113(18)
  DEFINITIONS OF NEED 114(1)
  SOURCES OF INFORMATION FOR THE ASSESSMENT OF NEED 115(10)
    Describing the Current Situation 116(1)
    Social Indicators of Need 117(2)
    Community Surveys of Need 119(2)
    Services Already Available 121(1)
    Key Informants 122(1)
    Focus Groups and Open Forums 123(2)
  INADEQUATE ASSESSMENT OF NEED 125(3)
    Failing to Examine Need 125(1)
    Failing to Examine the Context of Need 126(1)
    Failing to Relate Need to Implementation Plans 127(1)
    Failing to Deal with Ignorance of Need 127(1)
  USING NEEDS ASSESSMENTS IN PROGRAM PLANNING 128(3)
  Summary and Preview 129(1)
  Study Questions 130(1)
  Additional Resource 130(1)
7 Monitoring the Operation of Programs 131(21)
  MONITORING PROGRAMS AS A MEANS OF EVALUATING PROGRAMS 132(2)
  WHAT TO SUMMARIZE WITH INFORMATION SYSTEMS 134(1)
    Relevant Information 134(1)
    Actual State of Program 134(1)
    Program Participants 135(1)
    Providers of Services 135(1)
  PROGRAM RECORDS AND INFORMATION SYSTEMS 135(13)
    Problems with Agency Records 135(1)
    Increasing the Usefulness of Records 136(1)
    How Records Can Be Used to Monitor Programs 136(7)
    Reporting Information Separately for Each Therapist 143(1)
    Developing Information Systems for Agencies 144(2)
    Threatening Uses of Information Systems 146(2)
  AVOIDING COMMON PROBLEMS IN IMPLEMENTING AN INFORMATION SYSTEM 148(4)
    Guard Against the Misuse of the Information 149(1)
    Avoid Setting Arbitrary Standards 149(1)
    Avoid Serving the Needs of Only One Group 149(1)
    Avoid Duplicating Records 150(1)
    Avoid Adding to the Work of the Staff 150(1)
    Avoid a Focus on Technology 150(1)
  Summary and Preview 150(1)
  Study Questions 151(1)
  Additional Resource 151(1)
8 Qualitative Evaluation Methods 152(19)
  EVALUATION SETTINGS BEST SERVED BY QUALITATIVE EVALUATIONS 153(3)
    Admission to Graduate Studies 154(1)
    Dissatisfaction with a Library Collection 154(1)
    Evaluating a Political Campaign 155(1)
  GATHERING QUALITATIVE INFORMATION 156(7)
    The Central Importance of the Observer 156(1)
    Observational Methods 157(3)
    CASE STUDY 2: USING QUALITATIVE METHODS IN AN EVALUATION OF A UNIVERSITY LIBRARY 160(1)
    Interviewing to Obtain Qualitative Information 160(3)
  CARRYING OUT NATURALISTIC EVALUATIONS 163(3)
    Phase One: Making Unrestricted Observations 163(1)
    Phase Two: Integrating Impressions 164(1)
    Phase Three: Sharing Interpretations 164(1)
    Phase Four: Preparing Reports 164(1)
    Are Qualitative Evaluations Subjective? 165(1)
  COORDINATING QUALITATIVE AND QUANTITATIVE METHODS 166(2)
    The Substance of the Evaluation 166(1)
    Getting Insights from the Most Successful Participants 167(1)
    Changing Emphases as Understanding Expands 167(1)
    The Evaluation Questions 167(1)
    Cost of Evaluation 168(1)
  PHILOSOPHICAL ASSUMPTIONS 168(3)
  Summary and Preview 169(1)
  Study Questions 170(1)
  Additional Resource 170(1)
9 Single-Group, Nonexperimental Outcome Evaluations 171(21)
  SINGLE-GROUP EVALUATION DESIGNS 171(2)
    Observe Only After the Program 171(1)
    Observe Before and After the Program 172(1)
  USES OF SINGLE-GROUP, DESCRIPTIVE DESIGNS 173(7)
    Did the Participants Meet a Criterion? 173(1)
    Did the Participants Improve? 173(1)
    CASE STUDY 3: A PRETEST-POSTTEST DESIGN TO EVALUATE A PEER-BASED PROGRAM TO PREVENT SKIN CANCER 174(1)
    Did the Participants Improve Enough? 174(3)
    Relating Change to Service Intensity and Participant Characteristics 177(3)
  THREATS TO INTERNAL VALIDITY 180(6)
    Actual but Nonprogram-Related Changes in the Participants 180(1)
    Apparent Changes Dependent on Who Was Observed 181(3)
    Changes Related to Methods of Obtaining Observations 184(1)
    Effects of Interactions of These Threats 185(1)
    Internal Validity Threats Are Double-Edged Swords 185(1)
  CONSTRUCT VALIDITY IN PRETEST-POSTTEST DESIGNS 186(1)
  OVERINTERPRETING THE RESULTS OF SINGLE-GROUP DESIGNS 187(1)
  USEFULNESS OF SINGLE-GROUP DESIGNS AS INITIAL APPROACHES TO PROGRAM EVALUATION 188(5)
    Assessing the Usefulness of Further Evaluations 188(1)
    Correlating Improvement with Other Variables 189(1)
    Preparing the Facility for Further Evaluation 189(1)
  Summary and Preview 190(1)
  Study Questions 190(1)
  Additional Resource 191(1)
10 Quasi-Experimental Approaches to Outcome Evaluation 192(23)
  MAKING NUMEROUS OBSERVATIONS 193(3)
  TIME-SERIES DESIGNS 196(5)
    Patterns of Outcomes Over Time Periods 197(1)
    Analysis of Time-Series Designs 198(3)
  OBSERVING OTHER GROUPS 201(5)
    Nonequivalent Control Group Designs 201(1)
    Problems in Selecting Comparison Groups 202(7)
    CASE STUDY 4: NONEQUIVALENT CONTROL GROUPS USED TO EVALUATE AN EMPLOYEE INCENTIVE PLAN 205(1)
  REGRESSION-DISCONTINUITY DESIGN 206(3)
  OBSERVING OTHER DEPENDENT VARIABLES 209(1)
  COMBINING DESIGNS TO INCREASE INTERNAL VALIDITY 209(6)
    Time-Series and Nonequivalent Control Groups 209(2)
    Selective Control Design 211(2)
  Summary and Preview 213(1)
  Study Questions 214(1)
  Additional Resource 214(1)
11 Using Experiments to Evaluate Programs 215(16)
  EXPERIMENTS IN PROGRAM EVALUATION 215(3)
    Benefits of Experiments 215(1)
    Experimental Designs 216(2)
  OBJECTIONS TO EXPERIMENTATION 218(2)
    Don't Experiment on Me! 218(1)
    We Already Know What Is Best 218(1)
    I Know What Is Best for My Client 219(1)
    Experiments Are Just Too Much Trouble 220(1)
  THE MOST DESIRABLE TIMES TO CONDUCT EXPERIMENTS 220(4)
    When a New Program Is Introduced 221(1)
    When Stakes Are High 221(1)
    When There Is Controversy About Program Effectiveness 222(1)
    When Policy Change Is Desired 222(1)
    CASE STUDY 5: TEACHING DOCTORS COMMUNICATION SKILLS: AN EVALUATION WITH RANDOM ASSIGNMENT AND PRETESTS 223(1)
    When Demand Is High 223(1)
  GETTING THE MOST OUT OF AN EXPERIMENTAL DESIGN 224(8)
    Take Precautions Before Data Collection 224(2)
    Keep Track of Randomization While the Experiment Is in Progress 226(1)
    Analyze the Data Reflectively 227(2)
  Summary and Preview 229
  Study Questions 229(1)
  Additional Resource 230(1)
12 Analyses of Costs and Outcomes 231(19)
  COST ANALYSES AND BUDGETS 232(3)
    Types of Costs 232(2)
    An Example Budget 234(1)
    The Necessity of Examining Costs 234(1)
  COMPARING OUTCOMES TO COSTS 235(5)
    The Essence of Cost-Benefit Analysis 236(2)
    The Essence of Cost-Effectiveness Analysis 238(2)
    When Outcomes Cannot Be Put into the Same Units 240(1)
  SOME DETAILS OF COST ANALYSES 240(6)
    Units of Analysis Must Reflect the Purpose of the Program 240(1)
    Future Costs and Benefits Are Estimated 241(2)
    Who Pays the Costs and Who Reaps the Benefits? 243(2)
    CASE STUDY 6: THE VALUE OF PROVIDING SMOKING CESSATION CLINICS FOR EMPLOYEES ON COMPANY TIME 244(1)
    Using Cost-Benefit and Cost-Effectiveness Analyses 245(1)
  MAJOR CRITICISMS OF COST ANALYSES 246(5)
    The Worth of Psychological Benefits Is Hard to Estimate 246(1)
    Placing a Value on Lives Seems Wrong 246(2)
    Cost-Benefit and Cost-Effectiveness Analyses Require Many Assumptions 248(1)
  Summary and Preview 248(1)
  Study Questions 249(1)
  Additional Resource 249(1)
13 Evaluation Reports: Interpreting and Communicating Findings 250(18)
  DEVELOPING A COMMUNICATION PLAN 251(2)
    Explore Stakeholder Information Needs 251(1)
    Plan Reporting Meetings 251(2)
    Set a Communication Schedule 253(1)
  PERSONAL PRESENTATIONS OF FINDINGS 253(4)
    Need for Personal Presentations 253(1)
    Content of Personal Presentations 254(2)
    Audience for the Personal Presentations 256(1)
    Distributing Drafts of Reports 256(1)
  CONTENT OF FORMAL WRITTEN EVALUATION REPORTS 257(9)
    Remember the Purposes of the Formal Report 257(1)
    Provide an Outline and a Summary 258(1)
    Describe the Context of the Evaluation 258(2)
    Describe the Program Participants 260(1)
    Justify the Criteria Selected 260(1)
    Describe the Data-Gathering Procedures 260(1)
    Provide the Findings 261(3)
    Develop Recommendations 264(1)
    Formal Reports Should Look Attractive 265(1)
  PROVIDE PROGRESS REPORTS AND PRESS RELEASES 266(3)
  Summary and Preview 266(1)
  Study Questions 266(1)
  Additional Resources 267(1)
14 How to Encourage Utilization 268(16)
  OBSTACLES TO EFFECTIVE UTILIZATION 269(2)
    Constraints on Managers 269(1)
    Value Conflicts Among Stakeholders 269(1)
    Misapplied Methodology 270(1)
    Evaluating a Program at Arm's Length 270(1)
  DEALING WITH MIXED FINDINGS 271(2)
    Don't Abdicate Your Responsibility 271(1)
    Don't Take the Easy Way Out 271(1)
    Show How to Use the Evaluation to Improve the Program 272(1)
  USING EVALUATIONS WHEN AN INNOVATIVE PROGRAM SEEMS NO BETTER THAN OTHER TREATMENTS 273(2)
    When Can Evaluators Be Sure Groups Do Not Differ? 273(1)
    Are Evaluations Valuable Even When No Advantage for the Innovation Is Found? 274(1)
    CASE STUDY 7: EVALUATIONS OF THE OUTCOMES OF BOOT CAMP PRISONS: THE VALUE OF FINDING NO DIFFERENCES BETWEEN PROGRAM AND COMPARISON GROUPS 275(1)
  DEVELOPING A LEARNING CULTURE 275(5)
    Work with Stakeholders 275(1)
    Adopt Developmental Interpretations 276(1)
    Frame Findings in Terms of Improvements 277(1)
    Treat Findings as Tentative Indicators, Not Final Answers 278(1)
    Recognize Service Providers' Needs 278(1)
    Keep Evaluation Findings on the Agency's Agenda 279(1)
  THE EVALUATION ATTITUDE 280(4)
  Summary and Possible Trends for Program Evaluation 281(1)
  Study Questions 282(1)
  Additional Resource 283(1)
APPENDIX: ILLUSTRATIVE PROGRAM EVALUATION REPORT 284(13)
REFERENCES 297(28)
NAME INDEX 325(7)
SUBJECT INDEX 332

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
