
Evaluation in the Human Services

  • ISBN13: 9780875814445
  • ISBN10: 0875814441
  • Edition: 6th
  • Format: Paperback
  • Copyright: 2001
  • Publisher: Wadsworth Publishing
List Price: $53.95

Summary

EVALUATION IN THE HUMAN SERVICES is a straightforward introduction designed to serve as the primary text in introductory evaluation, case management, and applied research courses in social work and related human service programs. It can also be used as a supplementary text in practice methods and administration courses that emphasize accountability. It focuses on the core content students need to understand and appreciate the role of evaluation in professional social work practice. The book prepares students to participate in evaluative activities, to become critical producers and consumers of the professional evaluative literature, and to build a solid foundation for more advanced courses. Because organizations and practitioners are increasingly required to document the effects of their services not only at the program level but also at the case level, the book emphasizes monitoring, which can easily be incorporated into the ongoing activities of the practitioner and the agency.

Table of Contents

Preface xiii
PART I: BUILDING BLOCKS OF EVALUATION

Introduction 3(30)
    The Quality Improvement Process 7(1)
    Why Should We Do Evaluations? 8(6)
    Increase our Knowledge Base 8(2)
    Guide Decision Making 10(3)
    Demonstrate Accountability 13(1)
    Assure That Client Objectives Are Met 14(1)
    Fears about Evaluation 14(2)
    Scope of Evaluations 16(1)
    Quality Improvement Approaches 17(8)
    The Project Approach 17(3)
    The Monitoring Approach 20(5)
    Program Learning 25(2)
    Summing Up and Looking Ahead 27(1)
    Key Terms 28(1)
    Study Questions 29(1)
    References and Further Readings 30(3)
What Is a Program? 33(26)
    Social Service Agencies 33(4)
    Agency Mission Statements 34(1)
    Agency Goals 35(1)
    Agency Objectives 36(1)
    Social Service Programs 37(5)
    An Agency Versus a Program 42(1)
    Program Logic Models 42(1)
    Program Goals 43(4)
    Unintended Program Results 45(1)
    Program Goals Versus Agency Goals 45(2)
    Types of Program Objectives 47(1)
    Knowledge-Based 47(1)
    Affective-Based 48(1)
    Behaviorally Based 48(1)
    Qualities of Program Objectives 48(3)
    Meaningful 49(1)
    Specific 50(1)
    Measurable 50(1)
    Directional 51(1)
    Program and Practice Objectives 51(2)
    Program Activities 53(2)
    Program Logic Model: An Example 55(1)
    Summing Up and Looking Ahead 55(1)
    Key Terms 56(1)
    Study Questions 56(1)
    References and Further Readings 57(2)
Types of Evaluations 59(30)
    Needs Assessment 60(7)
    Visibility of Social Problems 61(1)
    Wants, Demands, and Needs 61(1)
    Perceived and Expressed Needs 62(1)
    Need Assessment Questions 63(4)
    Evaluability Assessment 67(2)
    Program Development 67(1)
    Teamwork 68(1)
    Evaluability Assessment Questions 69(1)
    Process Evaluation 69(4)
    Formative and Summative Evaluations 70(1)
    Client Flow Charts 71(2)
    Process Evaluation Questions 73(1)
    Outcome Evaluation 73(6)
    Outcome and Outputs 75(1)
    Benchmarking 76(1)
    Consumer Satisfaction 77(1)
    Outcome Questions 78(1)
    Cost-Benefit Evaluation 79(5)
    Figuring Costs 81(1)
    Valuing Human Experiences 82(2)
    Cost-Benefit Questions 84(1)
    Summing Up and Looking Ahead 84(1)
    Key Terms 85(1)
    Study Questions 86(1)
    References and Further Readings 86(3)
Planning and Focusing an Evaluation 89(24)
    Evaluation as Representation 90(2)
    Common Characteristics of All Evaluations 92(3)
    Program Models 92(1)
    Resource Constraints 93(1)
    Evaluation Tools 94(1)
    Politics and Ethics 94(1)
    Cultural Considerations 95(1)
    Presentation of Evaluation Findings 95(1)
    Focusing an Evaluation 95(2)
    Planning with Stakeholders 97(1)
    Identifying Data Needs 98(2)
    Selecting What to Monitor 100(8)
    Client Demographics 100(2)
    Service Statistics 102(1)
    Quality Standards 103(2)
    Feedback 105(1)
    Client Outcomes 106(2)
    Summing Up and Looking Ahead 108(1)
    Key Terms 108(1)
    Study Questions 109(1)
    References and Further Readings 110(3)

PART II: TOOLS OF EVALUATION

Data Sources, Sampling, and Data Collection 113(30)
    Data Sources 114(2)
    Sampling 116(4)
    Data Collection 120(12)
    Obtaining Existing Data 120(5)
    Obtaining New Data 125(7)
    Fitting Data Collection to the Program 132(3)
    Ease of Use 132(2)
    Appropriateness to the Flow of Program Operations 134(1)
    Design with User Input 135(1)
    Developing a Data Collection Monitoring System 135(1)
    Summing Up and Looking Ahead 136(1)
    Key Terms 136(3)
    Study Questions 139(1)
    References and Further Readings 140(3)
Measurement 143(36)
    Why Measurement is Necessary 143(3)
    Objectivity 145(1)
    Precision 145(1)
    Types of Measuring Instruments 146(12)
    Rating Scales 147(3)
    Summated Scales 150(4)
    Goal Attainment Scaling (GAS) 154(4)
    Standardized Measuring Instruments 158(2)
    Locating Standardized Measuring Instruments 160(4)
    Publishers 160(2)
    Professional Journals and Books 162(2)
    Evaluating Measuring Instruments 164(9)
    Validity 165(1)
    Reliability 166(1)
    Sensitivity 167(1)
    Nonreactivity 167(1)
    Representativeness of the Sample 168(1)
    Utility 168(5)
    Summing Up and Looking Ahead 173(1)
    Key Terms 173(1)
    Study Questions 174(1)
    References and Further Readings 175(4)
Case-Level Evaluations 179(30)
    Informal Case-Level Evaluations 179(2)
    Case Consultations 180(1)
    Case Conferences 180(1)
    Formal Case-Level Evaluations 181(5)
    Establishing Baselines 182(1)
    Measurable Practice Objectives 182(1)
    Repeated Measurements 183(1)
    Graphic Data Displays 183(3)
    Comparison Across Phases 186(1)
    Evaluation Design Continuum 186(12)
    Exploratory Designs 187(3)
    Descriptive Designs 190(4)
    Explanatory Designs 194(4)
    Advantages of Case-Level Designs 198(3)
    The Practitioner Is Responsible for Data Collection 199(1)
    The Focus Is on the Client 199(1)
    Clinical Decisions Are Based on Data Collected 200(1)
    Problem Analysis Is Facilitated 200(1)
    The Cost Is Small In Time and Disruptiveness 200(1)
    Client's Situation Is Taken into Account 201(1)
    Data Can Be Used in Program-Level Evaluations 201(1)
    Limitations of Case-Level Designs 201(2)
    Limited Generalizability 202(1)
    Explanatory Designs Are Difficult to Implement 202(1)
    Summing Up and Looking Ahead 203(1)
    Key Terms 203(1)
    Study Questions 204(2)
    References and Further Readings 206(3)
Program-Level Evaluations 209(48)
    The Evaluation Continuum 209(2)
    Explanatory Designs 210(1)
    Descriptive Designs 210(1)
    Exploratory Designs 210(1)
    Choosing a Design 211(1)
    Characteristics of "Ideal" Evaluations 211(9)
    Time Order of the Intervention 212(1)
    Manipulation of the Intervention 213(1)
    Relationship Between Interventions and Objectives 214(1)
    Control of Rival Hypotheses 214(6)
    Internal and External Validity 220(7)
    Threats to Internal Validity 220(6)
    Threats to External Validity 226(1)
    Program-Level Evaluation Designs 227(2)
    Exploratory Designs 229(8)
    One-Group Posttest-Only Design 230(2)
    Multigroup Posttest-Only Design 232(1)
    Longitudinal Case Study Design 233(1)
    Longitudinal Survey Design 234(3)
    Descriptive Designs 237(7)
    Randomized One-Group Posttest-Only Design 237(2)
    Randomized Cross-Sectional and Longitudinal Survey Design 239(2)
    One-Group Pretest-Posttest Design 241(1)
    Comparison Group Posttest-Only Design 241(1)
    Comparison Group Pretest-Posttest Design 242(1)
    Interrupted Time-Series Design 243(1)
    Explanatory Designs 244(3)
    Classical Experimental Design 245(1)
    Randomized Posttest-Only Control Group Design 246(1)
    Summing Up and Looking Ahead 247(1)
    Key Terms 247(3)
    Study Questions 250(2)
    References and Further Readings 252(5)

PART III: ISSUES OF EVALUATION

Politics, Ethics, and Standards 257(28)
    Politics of Evaluation 257(1)
    Appropriate and Inappropriate Uses of Evaluation 258(5)
    Misuses of Evaluation 258(3)
    Proper Uses of Evaluation 261(2)
    Political Influences on the Evaluation Process 263(4)
    Manipulating the Evaluation Process 263(1)
    Misdirecting the Evaluation Process 264(3)
    Professional Standards for Evaluation 267(8)
    Utility 268(1)
    Feasibility 269(1)
    Propriety 269(6)
    Accuracy 275(1)
    Other Standards 275(1)
    Principles of Evaluation Practice 276(3)
    Evaluation and Service Delivery Activities Should Be Integrated 276(1)
    Involve from the Beginning as Many Stakeholder Groups as Possible 277(1)
    Involve All Levels of Staff in the Evaluation Process 278(1)
    Make Explicit the Purpose of the Evaluation 278(1)
    Provide a Balanced Report and Disseminate Early and Regularly 279(1)
    Summing Up and Looking Ahead 279(1)
    Key Terms 280(1)
    Study Questions 280(2)
    References and Further Readings 282(3)
Culturally Appropriate Evaluations 285(22)
    The Impact of Culture 286(1)
    Bridging the Culture Gap 287(3)
    Cultural Awareness 287(2)
    Intercultural Communications 289(1)
    Cultural Frameworks 290(4)
    Orientation to Information 291(1)
    Decision Making 291(1)
    Individualism 292(1)
    Tradition 292(1)
    Pace of Life 293(1)
    Putting it Together 294(6)
    Cultural Awareness 294(1)
    Intercultural Communication Skills 295(1)
    Developing Specific Knowledge About the Culture 296(1)
    Adapting Evaluations 297(3)
    Summing Up and Looking Ahead 300(1)
    Key Terms 301(1)
    Study Questions 301(1)
    References and Further Readings 302(5)

PART IV: UTILITY OF EVALUATION

Developing a Data Information System 307(26)
    Staff Members' Roles in Developing a Data Information System 308(1)
    Establishing an Organizational Plan 309(4)
    Case-Level Data Collection 310(3)
    Program-Level Data Collection 313(1)
    Data Collection at Intake 314(12)
    Data Collection at Client Contact 318(4)
    Data Collection at Termination 322(1)
    Data Collection to Obtain Feedback 323(3)
    Data Management 326(1)
    Manual Data Management 326(3)
    Computer-Assisted Data Management 327(1)
    Reporting 328(1)
    Summing Up and Looking Ahead 329(1)
    Key Terms 330(1)
    Study Questions 330(1)
    References and Further Readings 331(2)
Decision Making with Objective and Subjective Data 333(28)
    Objective Data 333(2)
    Subjective Data 335(1)
    Case-Level Decision Making 336(13)
    The Engagement and Problem-Definition Phase 336(2)
    The Practice Objective-Setting Phase 338(1)
    The Intervention Phase 339(7)
    The Termination and Follow-Up Phase 346(3)
    Program-Level Decision Making 349(5)
    Process 350(1)
    Outcome 350(4)
    Using Outcome Monitoring Data In Program-Level Decision Making 354(3)
    Acceptable Results 355(1)
    Mixed Results 355(1)
    Inadequate Results 356(1)
    Summing Up 357(1)
    Study Questions 357(4)
Credits 361(2)
Index 363

Supplemental Materials

What is included with this book?

A new copy of this book includes any supplemental materials advertised. Please check the title of the book to determine whether it should include access cards, study guides, lab manuals, CDs, or other materials.

Used, rental, and eBook copies are not guaranteed to include any supplemental materials; typically, only the book itself is included. This is true even if the title states that it includes access cards, study guides, lab manuals, CDs, or other materials.
