Amazon no longer offers textbook rentals. We do!

We're the #1 textbook rental company. Let us show you why.

Statistical Methods in Education and Psychology

by Gene V. Glass; Kenneth D. Hopkins
  • ISBN13: 9780205142125
  • ISBN10: 0205142125
  • Edition: 3rd
  • Format: Hardcover
  • Copyright: 1996-01-01
  • Publisher: Allyn & Bacon
List Price: $185.00

Summary

The approach of SMEP-III is conceptual rather than mathematical. The authors stress understanding, application, and interpretation of concepts rather than derivation, proof, or hand computation.

Table of Contents

Preface xii
Introduction
1(5)
The "Image" of Statistics
1(1)
Descriptive Statistics
2(1)
Inferential Statistics
2(1)
Statistics and Mathematics
3(1)
Case Method
4(1)
Our Targets
5(1)
Variables, Measurement, Scales
6(9)
Variables and Their Measurement
6(1)
Measurement: The Observation of Variables
6(1)
Measurement Scales: Nominal Measurement
7(1)
Ordinal Measurement
7(1)
Interval Measurement
8(1)
Ratio Measurement
8(1)
Interrelationships among Measurement Scales
9(1)
Continuous and Discrete Variables
10(1)
Chapter Summary
11(1)
Case Study
11(1)
Suggested Computer Activities
12(3)
Mastery Test
12(1)
Answers to Mastery Test
13(2)
Frequency Distributions and Visual Displays of Data
15(34)
Tabulating Data
15(1)
Grouped Frequency Distributions
16(3)
Grouping and Loss of Information
19(1)
Graphing a Frequency Distribution: The Histogram
19(1)
Frequency and Percentage Polygons
20(3)
Types of Distributions
23(1)
Cumulative Distributions and the Ogive Curve
24(1)
Percentiles
25(1)
Box-and-Whisker Plots
26(3)
Stem-and-Leaf Displays
29(2)
Time-Series Graphs
31(1)
Misleading Graphs: How to Lie with Statistics
31(6)
Chapter Summary
37(1)
Case Study
37(4)
Suggested Computer Activity
41(8)
Mastery Test
41(3)
Problems and Exercises
44(3)
Answers to Mastery Test
47(1)
Answers to Problems and Exercises
47(2)
Measures of Central Tendency
49(17)
Introduction
49(1)
The Mode
49(2)
The Median
51(1)
Summation Notation
52(1)
The Mean
53(1)
More Summation Notation
53(1)
Adding or Subtracting a Constant
54(1)
Multiplying or Dividing by a Constant
54(1)
Sum of Deviations
55(1)
Sum of Squared Deviations
55(1)
The Mean of the Sum of Two or More Scores
56(1)
The Mean of a Difference
56(1)
Mean, Median, and Mode of Two or More Groups Combined
56(2)
Interpretation of Mode, Median, and Mean
58(1)
Central Tendency and Skewness
58(2)
Measures of Central Tendency as Inferential Statistics
60(1)
Which Measure is Best?
61(1)
Chapter Summary
61(1)
Case Study
61(2)
Suggested Computer Exercise
63(3)
Mastery Test
63(1)
Problems and Exercises
64(1)
Answers to Mastery Test
65(1)
Answers to Problems and Exercises
65(1)
Measures of Variability
66(14)
Introduction
66(1)
The Range
66(1)
H-Spread and the Interquartile Range
67(1)
Deviation Scores
67(1)
Sum of Squares
67(1)
More about the Summation Operator, Σ
68(1)
The Variance of a Population
69(1)
The Variance Estimated from a Sample
69(1)
The Standard Deviation
70(1)
The Effect of Adding or Subtracting a Constant on Measures of Variability
71(1)
The Effect of Multiplying or Dividing by a Constant on Measures of Variability
71(1)
Variance of a Combined Distribution
72(1)
Inferential Properties of the Range, s², and s
73(2)
Chapter Summary
75(1)
Case Study
75(1)
Suggested Computer Exercise
76(4)
Mastery Test
77(1)
Problems and Exercises
78(1)
Answers to Mastery Test
78(1)
Answers to Problems and Exercises
79(1)
The Normal Distribution and Standard Scores
80(23)
The Importance of the Normal Distribution
80(1)
God Loves the Normal Curve
80(3)
The Standard Normal Distribution as a Standard Reference Distribution: z-Scores
83(2)
Ordinates of the Normal Distribution
85(1)
Areas under the Normal Curve
85(1)
Other Standard Scores
86(1)
T-Scores
87(2)
Areas under the Normal Curve in Samples
89(1)
Skewness
89(3)
Kurtosis
92(2)
Transformations
94(1)
Normalized Scores
94(1)
Chapter Summary
95(1)
Case Study
95(3)
Suggested Computer Exercise
98(5)
Mastery Test
98(2)
Problems and Exercises
100(1)
Answers to Mastery Test
101(1)
Answers to Problems and Exercises
102(1)
Correlations: Measures of Relationship Between Two Variables
103(49)
Introduction
103(1)
The Concept of Correlation
103(1)
Scatterplots
104(2)
The Measurement of Correlation
106(1)
The Use of Correlation Coefficients
107(1)
Interpreting r as a Percent
107(2)
Linear and Curvilinear Relationships
109(2)
Calculating the Pearson Product-Moment Correlation Coefficient, r
111(1)
A Computational Illustration of r
112(1)
Scatterplots
113(3)
Correlation Expressed in Terms of z-scores
116(1)
Linear Transformations and Correlation
117(1)
The Bivariate Normal Distribution
118(3)
Effects of Variability on Correlation
121(1)
Correcting for Restricted Variability
122(1)
Effect of Measurement Error on r and the Correction for Attenuation
123(4)
The Pearson r and Marginal Distributions
127(1)
The Effect of the Unit of Analysis on Correlation: Ecological Correlations
127(1)
The Variance of a Sum
128(1)
The Variance of a Difference
129(1)
Additional Measures of Relationship: The Spearman Rank Correlation, rranks
129(1)
The Phi Coefficient: Both X and Y are Dichotomies
130(3)
The Point-Biserial Coefficient
133(1)
The Biserial Correlation
134(2)
The Biserial versus Point-Biserial Correlation Coefficients
136(1)
The Tetrachoric Coefficient
136(1)
Causation and Correlation
137(3)
Chapter Summary
140(1)
Case Study
141(3)
Suggested Computer Exercise
144(8)
Mastery Test
144(3)
Problems and Exercises
147(3)
Answers to Mastery Test
150(1)
Answers to Problems and Exercises
150(2)
Regression and Prediction
152(47)
Purposes of Regression Analysis
152(1)
The Regression Effect
153(1)
The Regression Equation Expressed in Standard z-scores
154(1)
Use of Regression Equations
155(1)
Cartesian Coordinates
156(1)
Estimating Y from X: The Raw-Score Regression Equation
157(2)
Error of Estimate
159(1)
Proportion of Predictable Variance, r²
160(1)
Least-Squares Criterion
161(1)
Homoscedasticity and the Standard Error of Estimate
161(3)
Regression and Pretest-Posttest Gains
164(3)
Part Correlation
167(1)
Partial Correlation
168(1)
Second-Order Partial Correlation
169(1)
Multiple Regression and Multiple Correlation
170(1)
The Standardized Regression Equation
171(1)
The Raw-Score Regression Equation
172(1)
Multiple Correlation
173(2)
Multiple Regression Equations with Three or More Independent Variables
175(1)
Stepwise Multiple Regression
175(1)
Illustration of Stepwise Multiple Regression
176(1)
Dichotomous and Categorical Variables as Predictors
177(1)
The Standard Error of Estimate in Multiple Regression
178(1)
The Multiple Correlation as an Inferential Statistic: Correction for Bias
178(2)
Assumptions
180(1)
Curvilinear Regression and Correlation
180(1)
Measuring Nonlinear Relationships between Two Variables: η
180(2)
Transforming Nonlinear Relationships into Linear Relationships
182(1)
Dichotomous Dependent Variables: Logistic Regression
182(2)
Categorical Dependent Variables with More than Two Categories: Discriminant Analysis
184(1)
Chapter Summary
184(1)
Case Study
185(3)
Suggested Computer Activity
188(11)
Mastery Test
189(3)
Problems and Exercises
192(3)
Answers to Mastery Test
195(1)
Answers to Problems and Exercises
196(3)
Probability
199(24)
Introduction
199(1)
Probability as a Mathematical System
199(2)
First Addition Rule of Probabilities
201(2)
Second Addition Rule of Probabilities
203(2)
Multiplication Rule of Probabilities
205(1)
Conditional Probability
206(1)
Bayes's Theorem
207(1)
Permutations
208(1)
Combinations
209(1)
Binomial Probabilities
210(2)
The Binomial and Sign Test
212(1)
Intuition and Probability
213(2)
Probability as an Area
215(1)
Combining Probabilities
216(1)
Expectations and Moments
217(1)
Chapter Summary
218(1)
Suggested Computer Activity
219(4)
Mastery Test
219(1)
Problems and Exercises
220(1)
Answers to Mastery Test
221(1)
Answers to Problems and Exercises
221(2)
Statistical Inference: Sampling and Interval Estimation
223(32)
Overview
223(1)
Populations and Samples: Parameters and Statistics
223(1)
Infinite versus Finite Populations
224(1)
Randomness and Random Sampling
225(1)
Accidental or Convenience Samples
226(1)
Random Samples
226(2)
Independence
228(1)
Systematic Sampling
229(1)
Point and Interval Estimates
230(1)
Sampling Distributions
230(1)
The Standard Error of the Mean
231(1)
Relationship of σX̄ to n
232(1)
Confidence Intervals
232(2)
Confidence Intervals when σ Is Known: An Example
234(1)
Central Limit Theorem: A Demonstration
235(4)
The Use of Sampling Distributions
239(1)
Proof that σX̄² = σ²/n
239(3)
Properties of Estimators
242(1)
Unbiasedness
242(2)
Consistency
244(1)
Relative Efficiency
245(2)
Chapter Summary
247(1)
Case Study
248(1)
Suggested Computer Activity
248(7)
Mastery Test
250(2)
Problems and Exercises
252(1)
Answers to Mastery Test
253(1)
Answers to Problems and Exercises
254(1)
Introduction to Hypothesis Testing
255(28)
Introduction
255(1)
Statistical Hypotheses and Explanations
255(1)
Statistical versus Scientific Hypotheses
256(1)
Testing Hypotheses about μ
257(1)
Testing H0: μ = K, a One-Sample z-Test
258(1)
Two Types of Errors in Hypothesis Testing
259(2)
Hypothesis Testing and Confidence Intervals
261(1)
Type-II Error, β, and Power
262(1)
Power
263(1)
Effect of α on Power
263(1)
Power and the Value Hypothesized in the Alternative Hypothesis
264(2)
Methods of Increasing Power
266(1)
Nondirectional and Directional Alternatives: Two-Tailed versus One-Tailed Tests
267(2)
Statistical Significance versus Practical Significance
269(1)
Confidence Limits for the Population Median
269(1)
Inferences Regarding μ When σ Is Not Known: t versus z
270(1)
The t-Distribution
271(3)
Confidence Intervals Using the t-Distribution
274(1)
Accuracy of Confidence Intervals when Sampling Non-Normal Distributions
274(1)
Chapter Summary
275(1)
Case Study
276(7)
Mastery Test
277(2)
Problems and Exercises
279(2)
Answers to Mastery Test
281(1)
Answers to Problems and Exercises
281(2)
Inferences About the Difference between Two Means
283(36)
Introduction
283(1)
Testing Statistical Hypotheses Involving Two Means
283(1)
The Null Hypothesis, H0: μ1 - μ2 = 0
284(1)
The t-Test for Comparing Two Independent Means
284(1)
Computing sX̄1-X̄2
285(2)
An Illustration
287(2)
Confidence Intervals about Mean Differences
289(1)
Effect Size
289(1)
t-Test Assumptions and Robustness
290(3)
Homogeneity of Variance
293(2)
What if Sample Sizes Are Unequal and Variances Are Heterogeneous: The Welch t' Test
295(1)
Independence of Observations
295(1)
Testing H0: μ1 = μ2 with Paired Observations
296(3)
Direct-Difference for the t-Test with Paired Observations
299(2)
Cautions Regarding Matched-Pair Designs in Research
301(1)
Power when Comparing Means
302(1)
Non-Parametric Alternatives: The Mann-Whitney Test and the Wilcoxon Signed Rank Test
303(1)
Chapter Summary
304(1)
Case Study
305(5)
Suggested Computer Activity
310(9)
Mastery Test
310(3)
Problems and Exercises
313(3)
Answers to Mastery Test
316(1)
Answers to Problems and Exercises
317(2)
Statistics for Categorical Dependent Variables: Inferences about Proportions
319(30)
Overview
319(1)
The Proportion as a Mean
319(1)
The Variance of a Proportion
320(1)
The Sampling Distribution of a Proportion: The Standard Error of p
321(1)
The Influence of n on σp
322(1)
Influence of the Sampling Fraction on σp
323(1)
The Influence of π on σp
324(1)
Confidence Intervals for π
325(2)
Quick Confidence Intervals for π
327(1)
Testing H0: π = K
328(2)
Testing Empirical versus Theoretical Distributions: The Chi-Square Goodness-of-Fit Test
330(3)
Testing Differences among Proportions: The Chi-Square Test of Association
333(2)
Other Formulas for the Chi-Square Test of Association
335(2)
The χ² Median Test
337(1)
Chi-Square and the Phi Coefficient
337(1)
Independence of Observations
338(1)
Inferences about H0: π1 = π2 when Observations Are Paired: McNemar's Test for Correlated Proportions
339(1)
Chapter Summary
340(1)
Case Study
341(1)
Suggested Computer Activity
342(7)
Mastery Test
342(2)
Problems and Exercises
344(3)
Answers to Mastery Test
347(1)
Answers to Problems and Exercises
347(2)
Inferences About Correlation Coefficients
349(28)
Testing Statistical Hypotheses Regarding ρ
349(1)
Testing H0: ρ = 0 Using the t-Test
350(3)
Directional Alternatives: "Two-Tailed" vs. "One-Tailed" Tests
353(1)
Sampling Distribution of r
354(1)
The Fisher Z-Transformation
355(2)
Setting Confidence Intervals for ρ
357(1)
Determining Confidence Intervals Graphically
358(1)
Testing the Difference between Independent Correlation Coefficients: H0: ρ1 = ρ2
359(2)
Testing Differences among Several Independent Correlation Coefficients
361(1)
Averaging r's
362(1)
Testing Differences between Two Dependent Correlation Coefficients: H0: ρ31 = ρ32
362(1)
Inferences about Other Correlation Coefficients
363(1)
The Point-Biserial Correlation Coefficient rpb
364(1)
Spearman's Rank Correlation: H0: ρranks = 0
365(1)
Partial Correlation, H0: ρ12.3 = 0
365(1)
Significance of a Multiple Correlation Coefficient
366(1)
Statistical Significance in Stepwise Multiple Regression
367(1)
Significance of the Biserial Correlation Coefficient, rbis
368(1)
Significance of the Tetrachoric Correlation Coefficient, rtet
369(1)
Significance of the Correlation Ratio, η²
369(1)
Testing for Nonlinearity of Regression
370(1)
Chapter Summary
371(1)
Case Study
371(1)
Suggested Computer Activity
372(5)
Mastery Test
372(1)
Problems and Exercises
373(2)
Answers to Mastery Test
375(1)
Answers to Problems and Exercises
375(2)
One-Factor Analysis of Variance
377(45)
Introduction
377(1)
Why Not Several t-Tests?
377(1)
ANOVA Nomenclature
378(1)
ANOVA Computation
379(1)
Sum of Squares Between, SSB
380(1)
Sum of Squares Within, SSW
381(1)
ANOVA Computational Illustration
382(1)
ANOVA Theory
382(3)
Mean Square Between Groups, MSB
385(1)
Mean Square Within Groups, MSW
385(1)
The F-Test
386(1)
ANOVA with Equal n's
386(2)
A Statistical Model for the Data
388(1)
Estimates of the Terms in the Model
389(1)
Sums of Squares
389(2)
Restatement of the Null Hypothesis in Terms of Population Means
391(1)
Degrees of Freedom
391(2)
Mean Squares: The Expected Value of MSW
393(1)
The Expected Value of MSB
394(1)
Some Distribution Theory
395(3)
The F-Test of the Null Hypothesis: Rationale and Procedure
398(2)
Type-I versus Type-II Errors: α and β
400(2)
A Summary of Procedures for One-Factor ANOVA
402(1)
Consequences of Failure to Meet the ANOVA Assumptions: The "Robustness" of ANOVA
402(3)
The Welch and Brown-Forsythe Modifications of ANOVA: What Does One Do When σ²'s and n's Differ?
405(1)
The Power of the F-Test
406(1)
An Illustration
407(1)
Power When σ is Unknown
408(1)
A Table for Estimating Power When J = 2
409(2)
The Non-Parametric Alternative: The Kruskal-Wallis Test
411(1)
Chapter Summary
411(1)
Case Study
412(2)
Suggested Computer Activity
414(8)
Mastery Test
414(2)
Problems and Exercises
416(3)
Answers to Mastery Test
419(1)
Answers to Problems and Exercises
420(2)
Inferences About Variances
422(22)
Introduction
422(1)
Chi-Square Distributions
422(2)
Chi-Square Distributions with ν > 1: χ²(2) and χ²(3)
424(1)
The Chi-Square Distribution with ν Degrees of Freedom, χ²(ν)
425(1)
Inferences about the Population Variance: H0: σ² = K
426(2)
F-Distributions
428(2)
Inferences about Two Independent Variances: H0: σ1² = σ2²
430(2)
Testing Homogeneity of Variance: Hartley's Fmax Test
432(2)
Testing Homogeneity of Variance from J Independent Samples: The Bartlett Test
434(2)
Other Tests of Homogeneity of Variance: The Levene and Brown-Forsythe Tests
436(1)
Inferences about H0: σ1² = σ2² with Paired Observations
437(1)
Relationships among the Normal, t, χ², and F-Distributions
438(1)
Chapter Summary
439(1)
Case Study and Computer Activity
440(4)
Mastery Test
440(1)
Problems and Exercises
441(2)
Answers to Mastery Test
443(1)
Answers to Problems and Exercises
443(1)
Multiple Comparisons and Trend Analysis
444(38)
Introduction
444(2)
Testing All Pairs of Means: The Studentized Range Statistic, q
446(1)
The Tukey Method of Multiple Comparisons
447(2)
The Effect Size of Mean Differences
449(1)
The Basis for Type-I Error Rate: Contrast versus Family
449(1)
The Newman-Keuls Method
450(1)
The Tukey and Newman-Keuls Methods Compared
451(1)
The Definition of a Contrast
452(1)
Simple versus Complex Contrasts
453(1)
The Standard Error of a Contrast
454(1)
The t-ratio for a Contrast
455(1)
Planned versus Post Hoc Comparisons
456(1)
Dunn (Bonferroni) Method of Multiple Comparisons
456(1)
Dunnett Method of Multiple Comparisons
457(1)
Scheffé Method of Multiple Comparisons
458(1)
Planned Orthogonal Contrasts
459(1)
Confidence Intervals for Contrasts
460(1)
Relative Power of Multiple Comparison Techniques
461(1)
Trend Analysis
462(2)
Significance of Trend Components
464(2)
Relation of Trends to Correlation Coefficients
466(1)
Assumptions of MC Methods
467(1)
Multiple Comparisons among Other Statistics
467(2)
Chapter Summary and Criteria for Selecting a Multiple Comparison Method
469(2)
Case Study
471(3)
Suggested Computer Activity
474(8)
Mastery Test
474(2)
Problems
476(1)
Exercises
477(2)
Answers to Mastery Test
479(1)
Answers to Problems
480(1)
Answers to Exercises
480(2)
Two- and Three-Factor ANOVA: An Introduction to Factorial Designs
482(54)
Introduction
482(1)
The Meaning of Interaction
483(2)
Interaction and Generalizability: Factors Do Not Interact
485(1)
Interaction and Generalizability: Factors Interact
486(1)
Interpreting Main Effects when Interaction Is Present
487(2)
Statistical Significance and Interaction
489(1)
Data Layout and Notation
489(2)
A Model for the Data
491(1)
Least-Squares Estimation of the Model
492(2)
Statement of Null Hypotheses
494(3)
Sums of Squares in the Two-Factor ANOVA
497(2)
Degrees of Freedom
499(1)
Mean Squares
500(1)
Illustration of the Computation for the Two-Factor ANOVA
501(2)
Expected Values of Mean Squares
503(3)
The Distribution of the Mean Squares
506(3)
Hypothesis Tests of the Null Hypotheses
509(4)
Determining Power in Factorial Designs
513(1)
Multiple Comparisons in Factorial ANOVA Designs
514(2)
Confidence Intervals for Means in Two-Factor ANOVA
516(1)
Three-Factor ANOVA
517(1)
Three-Factor ANOVA: An Illustration
517(3)
Three-Factor ANOVA Computation
520(1)
The Interpretation of Three-Factor Interactions
521(1)
Confidence Intervals in Three-Factor ANOVA
522(1)
How Factorial Designs Increase Power
522(1)
Factorial ANOVA with Unbalanced Designs
523(1)
Chapter Summary
524(1)
Case Study and Computer Activity
524(12)
Mastery Test
525(2)
Problems and Exercises
527(4)
Answers to Mastery Test
531(1)
Answers to Problems and Exercises
532(4)
Multi-Factor ANOVA Designs: Random, Mixed, and Fixed Effects
536(36)
Introduction
535(1)
The Random-Effects ANOVA Model
535(2)
Assumptions of the Random ANOVA Model
537(1)
An Example
538(1)
Mean Square Within, MSw
539(1)
Mean Square Between, MSBetween
539(1)
The Variance Component, σa²
540(2)
Confidence Interval for σa²/σe²
542(1)
Summary of Random ANOVA Model
543(1)
The Mixed-Effects ANOVA Model
544(3)
Mixed-Model ANOVA Assumptions
547(1)
Mixed-Model ANOVA Computation
547(3)
Multiple Comparisons in the Two-Factor Mixed Model
550(1)
Crossed and Nested Factors
551(2)
Computation of Sums of Squares for Nested Factors
553(1)
Determining the Sources of Variation in the ANOVA Table
554(1)
Degrees of Freedom for Nested Factors
554(1)
Determining Expected Mean Squares
555(1)
Error Mean Squares in Complex ANOVA Designs
556(1)
The Incremental Generalization Strategy: Inferential "Concentric Circles"
556(4)
Model Simplification and Pooling
560(1)
The Experimental Unit and the Observational Unit
561(2)
Chapter Summary
563(1)
Case Study/Application
563(9)
Mastery Test
566(3)
Problems and Exercises
569(1)
Answers to Mastery Test
570(1)
Answers to Problems and Exercises
571(1)
Repeated-Measures ANOVA
572(21)
Introduction
572(1)
A Simple Repeated-Measures ANOVA
572(1)
Repeated-Measures Assumptions
573(3)
Trend Analysis on Repeated-Measures Factors
576(1)
Estimating Reliability via Repeated-Measures ANOVA
576(2)
Repeated-Measures Designs with a Between-Subjects Factor
578(2)
Repeated-Measures ANOVA with Two Between-Subjects Factors
580(2)
Trend Analysis on Between-Subjects Factors
582(1)
Repeated-Measures ANOVA with Two Within-Subjects Factors and Two Between-Subjects Factors
582(1)
Repeated-Measures ANOVA versus MANOVA
583(2)
Chapter Summary
585(1)
Case Study
585(2)
Suggested Computer Activity
587(6)
Mastery Test
587(2)
Problems and Exercises
589(2)
Answers to Mastery Test
591(1)
Answers to Problems and Exercises
592(1)
An Introduction to the Analysis of Covariance
593(21)
The Functions of ANCOVA
593(1)
ANOVA Results
594(1)
ANCOVA Model
594(3)
ANCOVA Computations, SStotal
597(1)
The Adjusted Within Sum of Squares, SS'W
597(1)
The Adjusted Sum of Squares Between Groups, SS'B
598(1)
Degrees of Freedom in ANCOVA and the ANCOVA Table
598(2)
Adjusted Means, Y'j
600(1)
Confidence Intervals and Multiple Comparisons for Adjusted Means
601(1)
ANCOVA Illustrated Graphically
602(2)
ANCOVA Assumptions
604(1)
ANCOVA Precautions
605(1)
Covarying versus Stratifying
606(1)
Chapter Summary
607(1)
Case Study
608(1)
Suggested Computer Activity
609(5)
Mastery Test
609(1)
Problems and Exercises
610(2)
Answers to Mastery Test
612(1)
Answers to Problems and Exercises
612(2)
APPENDIX: TABLES 614(37)
Table A: Area of the Unit-Normal (z) Distribution
615(5)
Table B: Random Digits
620(1)
Table C: Percentile Points of the t-Distribution
621(2)
Table D: Percentile Points of Chi-Square Distributions
623(2)
Table E: Fisher's Z-Transformation of r
625(1)
Table F: Critical Values of F
626(8)
Table G: Power Curves for the F-Test
634(3)
Table H: Hartley's Fmax-Distribution
637(1)
Table I: Critical Values of the Studentized Range Statistic: q-Distribution
638(3)
Table J: Critical Values of r
641(1)
Table K: Critical Values of rranks, Spearman's Rank Correlation
642(1)
Table L: Critical t-Ratios for the Dunn (Bonferroni) Method of Multiple Comparison
643(3)
Table M: Critical t-Values for the Dunnett Statistic for Comparing Treatment Means with a Control
646(2)
Table N: Coefficients of Orthogonal Polynomials for Trend Analysis
648(1)
Table O: Binomial Probabilities when π = .5
649(2)
Bibliography 651(14)
Index 665(1)
Author Index 665(2)
Subject Index 667

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
