Amazon no longer offers textbook rentals. We do!

We're the #1 textbook rental company. Let us show you why.

Thirteen Strategies to Measure College Teaching

by Ronald A. Berk
  • ISBN13: 9781579221928
  • ISBN10: 1579221920

  • Format: Hardcover
  • Copyright: 2006-05-30
  • Publisher: Stylus Publishing, LLC

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now

List Price: $125.00 (save up to $103.37)
  • Rent Book: $83.13 (Free Shipping)
    Usually ships in 7-10 business days.
    *This item is part of an exclusive publisher rental program and requires an additional convenience fee. This fee will be reflected in the shopping cart.

Summary

  • Student evaluations of college teachers: perhaps the most contentious issue on campus
  • This book offers a more balanced approach
  • Evaluation affects pay, promotion, and tenure, so it is of intense interest to all faculty
  • Major academic marketing and publicity
  • Combines original research with Berk's signature wacky humor

To many college professors the words "student evaluations" trigger mental images of the shower scene from Psycho, with those bloodcurdling screams. They're thinking: "Why not just whack me now, rather than wait to see those ratings again?" This book takes off from the premise that student ratings are a necessary, but not sufficient, source of evidence for measuring teaching effectiveness. It is a fun-filled--but solidly evidence-based--romp through more than a dozen other methods that include measurement by self, peers, outside experts, alumni, administrators, employers, and even aliens. As the major stakeholders in this process, both faculty AND administrators, plus clinicians who teach in schools of medicine, nursing, and the allied health fields, need to be involved in writing, adapting, evaluating, or buying items to create the various scales to measure teaching performance. This is the first basic introduction in the faculty evaluation literature to take you step-by-step through the process of developing these tools, interpreting their scores, and making decisions about teaching improvement, annual contract renewal or dismissal, merit pay, promotion, and tenure. It explains how to create appropriate, high-quality items and detect those that can introduce bias and unfairness into the results. Ron Berk also stresses the need for "triangulation"--the use of multiple, complementary methods--to provide the properly balanced, comprehensive, and fair assessment of teaching that is the benchmark of employment decision making. This is a must-read to empower faculty, administrators, and clinicians to use appropriate evidence to make decisions accurately, reliably, and fairly. Don't trample each other in your stampede to snag a copy of this book!

Table of Contents

Acknowledgments xv
A Foreword (in Berkian Style) by Mike Theall xix
Introduction 1(8)
1 TOP 13 SOURCES OF EVIDENCE OF TEACHING EFFECTIVENESS 9(38)
    A Few Ground Rules 10(1)
    Teaching Effectiveness: Defining the Construct 11(3)
    National Standards 12(1)
    Beyond Student Ratings 13(1)
    A Unified Conceptualization 13(1)
    Thirteen Sources of Evidence 14(30)
    Student Ratings 15(4)
    Peer Ratings 19(3)
    External Expert Ratings 22(1)
    Self-Ratings 23(1)
    Videos 24(1)
    Student Interviews 25(2)
    Exit and Alumni Ratings 27(1)
    Employer Ratings 28(1)
    Administrator Ratings 29(1)
    Teaching Scholarship 30(1)
    Teaching Awards 31(1)
    Learning Outcome Measures 32(2)
    Teaching Portfolio 34(3)
    BONUS: 360° Multisource Assessment 37(7)
    Berk's Top Picks 44(1)
    Formative Decisions 45(1)
    Summative Decisions 45(1)
    Program Decisions 45(1)
    Decision Time 45(2)
2 CREATING THE RATING SCALE STRUCTURE 47(18)
    Overview of the Scale Construction Process 48(1)
    Specifying the Purpose of the Scale 48(2)
    Delimiting What Is to Be Measured 50(4)
    Focus Groups 50(1)
    Interviews 50(1)
    Research Evidence 51(3)
    Determining How to Measure the "What" 54(4)
    Existing Scales 55(1)
    Item Banks 55(1)
    Commercially Published Student Rating Scales 56(2)
    Universe of Items 58(2)
    Structure of Rating Scale Items 60(5)
    Structured Items 60(2)
    Unstructured Items 62(3)
3 GENERATING THE STATEMENTS 65(20)
    Preliminary Decisions 66(1)
    Domain Specifications 66(1)
    Number of Statements 66(1)
    Rules for Writing Statements 66(19)
    1. The statement should be clear and direct. 69(1)
    2. The statement should be brief and concise. 69(2)
    3. The statement should contain only one complete behavior, thought, or concept. 71(1)
    4. The statement should be a simple sentence. 72(1)
    5. The statement should be at the appropriate reading level. 72(1)
    6. The statement should be grammatically correct. 73(1)
    7. The statement should be worded strongly. 73(1)
    8. The statement should be congruent with the behavior it is intended to measure. 74(1)
    9. The statement should accurately measure a positive or negative behavior. 74(1)
    10. The statement should be applicable to all respondents. 75(1)
    11. The respondents should be in the best position to respond to the statement. 76(2)
    12. The statement should be interpretable in only one way. 78(1)
    13. The statement should NOT contain a double negative. 78(1)
    14. The statement should NOT contain universal or absolute terms. 79(1)
    15. The statement should NOT contain nonabsolute, warm-and-fuzzy terms. 79(1)
    16. The statement should NOT contain value-laden or inflammatory words. 80(1)
    17. The statement should NOT contain words, phrases, or abbreviations that would be unfamiliar to all respondents. 81(1)
    18. The statement should NOT tap a behavior appearing in any other statement. 81(1)
    19. The statement should NOT be factual or capable of being interpreted as factual. 82(1)
    20. The statement should NOT be endorsed or given one answer by almost all respondents or by almost none. 83(2)
4 SELECTING THE ANCHORS 85(20)
    Types of Anchors 86(8)
    Intensity Anchors 86(1)
    Evaluation Anchors 87(2)
    Frequency Anchors 89(1)
    Quantity Anchors 90(1)
    Comparison Anchors 91(3)
    Rules for Selecting Anchors 94(11)
    1. The anchors should be consistent with the purpose of the rating scale. 94(2)
    2. The anchors should match the statements, phrases, or word topics. 96(2)
    3. The anchors should be logically appropriate with each statement. 98(1)
    4. The anchors should be grammatically consistent with each question. 99(1)
    5. The anchors should provide the most accurate and concrete responses possible. 100(1)
    6. The anchors should elicit a range of responses. 101(1)
    7. The anchors on bipolar scales should be balanced, not biased. 101(1)
    8. The anchors on unipolar scales should be graduated appropriately. 102(3)
5 REFINING THE ITEM STRUCTURE 105(16)
    Preparing for Structural Changes 106(2)
    Issues in Scale Construction 108(13)
    1. What rating scale format is best? 108(1)
    2. How many anchor points should be on the scale? 109(1)
    3. Should there be a designated midpoint position, such as "Neutral," "Uncertain," or "Undecided," on the scale? 110(1)
    4. How many anchors should be specified on the scale? 111(1)
    5. Should numbers be placed on the anchor scale? 112(1)
    6. Should a "Not Applicable" (NA) or "Not Observed" (NO) option be provided? 113(2)
    7. How can response set biases be minimized? 115(6)
6 ASSEMBLING THE SCALE FOR ADMINISTRATION 121(20)
    Assembling the Scale 122(8)
    Identification Information 122(1)
    Purpose 123(1)
    Directions 124(1)
    Structured Items 125(4)
    Unstructured Items 129(1)
    Scale Administration 130(11)
    Paper-Based Administration 131(2)
    Online Administration 133(4)
    Comparability of Paper-Based and Online Ratings 137(2)
    Conclusions 139(2)
7 FIELD TESTING AND ITEM ANALYSES 141(20)
    Preparing the Draft Scale for a Test Spin 142(1)
    Field Test Procedures 143(5)
    Mini-Field Test 143(4)
    Monster-Field Test 147(1)
    Item Analyses 148(13)
    Stage 1: Item Descriptive Statistics 148(4)
    Stage 2: Interitem and Item-Scale Correlations 152(3)
    Stage 3: Factor Analysis 155(6)
8 COLLECTING EVIDENCE OF VALIDITY AND RELIABILITY 161(24)
    Validity Evidence 162(12)
    Evidence Based on Job Content Domain 164(4)
    Evidence Based on Response Processes 168(1)
    Evidence Based on Internal Scale Structure 169(2)
    Evidence Related to Other Measures of Teaching Effectiveness 171(1)
    Evidence Based on the Consequences of Ratings 172(2)
    Reliability Evidence 174(8)
    Classical Reliability Theory 174(1)
    Summated Rating Scale Theory 175(1)
    Methods for Estimating Reliability 176(6)
    Epilogue 182(3)
9 REPORTING AND INTERPRETING SCALE RESULTS 185(30)
    Generic Levels of Score Reporting 186(8)
    Item Anchor 186(1)
    Item 187(3)
    Subscale 190(1)
    Total Scale 191(1)
    Department/Program Norms 192(1)
    Subject Matter/Program-Level State, Regional, and National Norms 193(1)
    Criterion-Referenced versus Norm-Referenced Score Interpretations 194(3)
    Score Range 194(1)
    Criterion-Referenced Interpretations 195(1)
    Norm-Referenced Interpretations 196(1)
    Formative, Summative, and Program Decisions 197(16)
    Formative Decisions 198(6)
    Summative Decisions 204(8)
    Program Decisions 212(1)
    Conclusions 213(2)
References 215(26)
Appendices 241(40)
    A. Sample "Home-Grown" Rating Scales 241(16)
    B. Sample 360° Assessment Rating Scales 257(16)
    C. Sample Reporting Formats 273(4)
    D. Commercially Published Student Rating Scale Systems 277(4)
Index 281

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
