
Mastering Assessment: A Self-Service System for Educators

by W. James Popham
  • ISBN13: 9780132732918
  • ISBN10: 0132732912
  • Edition: 2nd
  • Format: Package
  • Copyright: 7/12/2011
  • Publisher: Pearson
List Price: $176.99
Buy New: $150.44 (save $26.55)

Supplemental Materials

What is included with this book?

  • A new copy of this book includes any supplemental materials advertised. Please check the title of the book to determine whether it should include access cards, study guides, lab manuals, CDs, or other materials.

Summary

THE MASTERING ASSESSMENT BOXSET INCLUDES:

  • Appropriate and Inappropriate Tests for Evaluating Schools
  • Assessing Students’ Affect
  • Assessing Students with Disabilities
  • Assessment Bias: How to Banish It
  • Classroom Evidence of Successful Teaching
  • College Entrance Examinations: The SAT and the ACT
  • Constructed-Response Tests: Building and Bettering
  • How Testing Can Help Teaching
  • Interpreting the Results of Large-Scale Assessments
  • Portfolio Assessment and Performance Testing
  • Reliability: What Is It and Is It Necessary?
  • Selected-Response Tests: Building and Bettering
  • The Role of Rubrics in Testing and Teaching
  • Test Preparation: Sensible or Sordid?
  • Validity: Assessment’s Cornerstone

Author Biography

W. James Popham is a nationally recognized expert on educational testing. For 30 years he taught courses at UCLA in instructional methods for prospective teachers, as well as courses in educational evaluation and measurement. He has also authored more than 25 books in the fields of curriculum, instruction, and assessment.

Table of Contents

Assessing Students’ Affect

What Is Affect?
            One Coin in Curriculum’s 3-Coin Fountain
            Potentially Assessable Affective Targets
Why Assess Affect?
A Three-Part Strategy for Assessing Students’ Affect
            Self-Report Inventories
            Anonymity: An Imperative
            Four Anonymity-Enhancement Tactics
            Group-Focused Inferences Only
Building Your Own Affective Inventories
            Likert Affective Inventories
            Multifocus Affective Inventories
            Confidence Inventories

Appropriate and Inappropriate Tests for Evaluating Schools

The Emergence of Test-Based Accountability
            A Source of National Pride
            The Arrival of ESEA
            A Profession Sleeps
            What Can Be Done?
Two Types of Instructionally Insensitive Tests
            Traditionally Constructed Standardized Achievement Tests
            In Pursuit of Score-Spread
            Linking Items to Suitably Spread Variables
            Standards-Based Tests Built for a Particular State
Instructionally Sensitive Accountability Tests
            A Manageable Number of Extraordinarily Significant Curricular Aims
            Succinct, Teacher-Palatable Assessment Descriptions
            Reports for Each Curricular Aim for Individual Students
            Instructionally Meaningful Reports
A Continuum of Instructional Sensitivity

College Entrance Examinations: The SAT and the ACT

The Role of College Entrance Tests
            A Fixed-Quota Quandary
            Predictive Power
            A Crucial Insight
Plain Talk about the SAT and the ACT
            Major Differences and Similarities: An Overview
            The SAT: Background and Description
            The ACT: Background and Description
Mission-Governed Test Making

Interpreting the Results of Large-Scale Assessments

What Makes a Test Standardized?
Score Interpretation
            Two Interpretive Frameworks
            Sometimes a Choice of Interpretations
            Percentiles
            Grade-Equivalent Scores
            Scale Scores
            Comparing Three Score-Interpretation Methods
Accuracy Estimates

Validity: Assessment’s Cornerstone

What Is Assessment Validity?
Score-Based Inferences
Words and Meanings
Validity Evidence
Content-Related Evidence of Validity
            Webb’s Alignment Approach
            Content-Related Evidence of Validity for Large-Scale Tests
            Content-Related Evidence of Validity for Classroom Tests
Criterion-Related Evidence of Validity
Construct-Related Evidence of Validity

Constructed-Response Tests: Building and Bettering

Payoffs and Perils of Constructed-Response Items
            Payoffs
            Perils
Rules for Item Generation
            General Item-Development Commandments
            Short-Answer Items
            Essay Items
Bettering Constructed-Response Items
Scoring Responses to Essay Items

How Testing Can Help Teaching

High-Stakes Tests
Test-Influenced Instructional Decisions
Preassessment
En Route Assessment and a Potentially Potent Process
Postassessment
Too Much Time and Too Much Trouble?
            Grain Size
            How Many Suitably Sized Curricular Aims?
Tests as Curricular Clarifiers

Selected-Response Tests: Building and Bettering

Payoffs and Perils of Selected-Response Items
            Efficiency and Coverage
            Overbooking on Memory-Focused Items
Rules for Item Generation
            General Test-Development Commandments
            Binary-Choice Items
            Matching Items
            Multiple-Choice Items
Improving Selected-Response Items
            Judgmentally Based Improvements
            Empirically Based Improvements

Assessment Bias: How to Banish It

The Nature of Assessment Bias
            Offensiveness
            Unfair Penalization
            Inference Distortion
Disparate Impact: A Clue, Not a Verdict
Three Common Sources of Assessment Bias
            Racial/Ethnic Bias
            Gender Bias
            Socioeconomic Bias, Assessment’s Closeted Skeleton
Bias Detection
            Judgmental Approaches
            Empirical Approaches

Test Preparation: Sensible or Sordid?

Forensics of Fraud
            Long-Standing Pressures to Raise Test Scores
Malevolence or Ignorance?
Tawdry Accountability Tests
The Professional Ethics Guideline
The Educational Defensibility Guideline
Common Test-Preparation Activities
Teaching to the Test

Assessing Students with Disabilities

Must Students with Disabilities Be Assessed on the Same Curricular Aims as Other Students?
            A Mini-History of Pertinent Federal Law
            Identical Curricular Aims
            Federal Law and Courtroom Rulings
Accommodations
            Not Altering What’s Being Measured
            The CCSSO Accommodations Manual
Appropriate Expectations for Students with Disabilities
            Reducing Expectations
            Unaltered Expectations
            A Personal Opinion

Reliability: What Is It and Is It Necessary?

Consistency, Consistency, Consistency
            Reliability and Validity
            Categories of Reliability Evidence
Score Consistency and Classification Consistency
            Correlation-Based Reliability
            Score Consistency
            Classification Consistency
Stability Reliability
Alternate-Form Reliability
Internal Consistency Reliability
The Standard Error of Measurement: A Terrific Tool for Teachers

Portfolio Assessment and Performance Testing

Portfolio Assessment
Performance Tests
            Performance Assessment Defined
            Task Identification
            Two Qualitative Concerns
A Capsule Judgment About Portfolio Assessment and Performance Tests

The Role of Rubrics in Testing and Teaching

What’s in a Name?
When to Use a Rubric?
Why Use a Rubric?
What’s in a Rubric?
            Evaluative Criteria
            Quality Distinctions for the Evaluative Criteria
            Application Strategy
An Unwarranted Reverence for Rubrics
Rubrics: The Rancid and the Rapturous
            Hypergeneral Rubrics
            Task-Specific Rubrics
            Skill-Focused Rubrics
Judging Rubrics
            Varied Formats, Qualitative Differences
            A Rubric to Evaluate Rubrics

Classroom Evidence of Successful Teaching

A Professional’s Responsibility
Countering Flawed School Appraisals
Cross-Sectional versus Longitudinal Evaluation Designs
Evidence-Enhancing Evaluative Procedures
            The Importance of Pretesting Students
            Blind Scoring
            Nonpartisan Scoring
The Split-and-Switch Design
            Implementing a Split-and-Switch Design
            Strengths and Cautions