W. James Popham is a nationally recognized expert on educational testing. For 30 years he taught UCLA courses in instructional methods for prospective teachers, as well as in evaluation and measurement. He has also authored more than 25 books in the fields of curriculum, instruction, and assessment.
Assessing Students’ Affect
What Is Affect?
One Coin in Curriculum’s 3-Coin Fountain
Potentially Assessable Affective Targets
Why Assess Affect?
A Three-Part Strategy for Assessing Students’ Affect
Anonymity: An Imperative
Four Anonymity-Enhancement Tactics
Group-Focused Inferences Only
Building Your Own Affective Inventories
Likert Affective Inventories
Multifocus Affective Inventories
Appropriate and Inappropriate Tests for Evaluating Schools
The Emergence of Test-Based Accountability
A Source of National Pride
The Arrival of ESEA
A Profession Sleeps
What Can Be Done?
Two Types of Instructionally Insensitive Tests
Traditionally Constructed Standardized Achievement Tests
In Pursuit of Score-Spread
Linking Items to Suitably Spread Variables
Standards-Based Tests Built for a Particular State
Instructionally Sensitive Accountability Tests
A Manageable Number of Extraordinarily Significant Curricular Aims
Succinct, Teacher-Palatable Assessment Descriptions
Reports for Each Curricular Aim for Individual Students
Instructionally Meaningful Reports
A Continuum of Instructional Sensitivity
College Entrance Examinations: The SAT and the ACT
The Role of College Entrance Tests
A Fixed-Quota Quandary
A Crucial Insight
Plain Talk about the SAT and the ACT
Major Differences and Similarities: An Overview
The SAT: Background and Description
The ACT: Background and Description
Mission-Governed Test Making
Interpreting the Results of Large-Scale Assessments
What Makes a Test Standardized?
Two Interpretive Frameworks
Sometimes a Choice of Interpretations
Comparing Three Score-Interpretation Methods
Validity: Assessment’s Cornerstone
What Is Assessment Validity?
Words and Meanings
Content-Related Evidence of Validity
Webb’s Alignment Approach
Content-Related Evidence of Validity for Large-Scale Tests
Content-Related Evidence of Validity for Classroom Tests
Criterion-Related Evidence of Validity
Construct-Related Evidence of Validity
Constructed-Response Tests: Building and Bettering
Payoffs and Perils of Constructed-Response Items
Rules for Item Generation
General Item-Development Commandments
Bettering Constructed-Response Items
Scoring Responses to Essay Items
How Testing Can Help Teaching
Test-Influenced Instructional Decisions
En Route Assessment and a Potentially Potent Process
Too Much Time and Too Much Trouble?
How Many Suitably Sized Curricular Aims?
Tests as Curricular Clarifiers
Selected-Response Tests: Building and Bettering
Payoffs and Perils of Selected-Response Items
Efficiency and Coverage
Overbooking on Memory-Focused Items
Rules for Item Generation
General Test-Development Commandments
Binary-Choice Items
Multiple-Choice Items
Improving Selected-Response Items
Assessment Bias: How to Banish It
The Nature of Assessment Bias
Disparate Impact: A Clue, Not a Verdict
Three Common Sources of Assessment Bias
Socioeconomic Bias, Assessment’s Closeted Skeleton
Test Preparation: Sensible or Sordid?
Forensics of Fraud
Long-Standing Pressures to Raise Test Scores
Malevolence or Ignorance?
Tawdry Accountability Tests
The Professional Ethics Guideline
The Educational Defensibility Guideline
Common Test-Preparation Activities
Teaching to the Test
Assessing Students with Disabilities
Must Students with Disabilities Be Assessed on the Same Curricular Aims as Other Students?
A Mini-History of Pertinent Federal Law
Identical Curricular Aims
Federal Law and Courtroom Rulings
Not Altering What’s Being Measured
The CCSSO Accommodations Manual
Appropriate Expectations for Students with Disabilities
A Personal Opinion
Reliability: What Is It and Is It Necessary?
Consistency, Consistency, Consistency
Reliability and Validity
Categories of Reliability Evidence
Score Consistency and Classification Consistency
Internal Consistency Reliability
The Standard Error of Measurement: A Terrific Tool for Teachers
Portfolio Assessment and Performance Testing
Performance Assessment Defined
Two Qualitative Concerns
A Capsule Judgment About Portfolio Assessment and Performance Tests
The Role of Rubrics in Testing and Teaching
What’s In a Name?
When to Use a Rubric?
Why Use a Rubric?
What’s In a Rubric?
Quality Distinctions for the Evaluative Criteria
An Unwarranted Reverence for Rubrics
Rubrics: The Rancid and the Rapturous
Varied Formats, Qualitative Differences
A Rubric to Evaluate Rubrics
Classroom Evidence of Successful Teaching
A Professional’s Responsibility
Countering Flawed School Appraisals
Cross-Sectional versus Longitudinal Evaluation Designs
Evidence-Enhancing Evaluative Procedures
The Importance of Pretesting Students
The Split-and-Switch Design
Implementing a Split-and-Switch Design
Strengths and Cautions