

Variance Components

by Shayle R. Searle; George Casella; Charles E. McCulloch
  • ISBN13: 9780470009598
  • ISBN10: 0470009594
  • Edition: 1st
  • Format: Paperback
  • Copyright: 2006-03-24
  • Publisher: Wiley-Interscience
  • Purchase Benefits
  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now
List Price: $166.34 (save up to $0.83)
  • Buy New: $165.51
    Add to Cart (Free Shipping)

    PRINT ON DEMAND: 2-4 WEEKS. THIS ITEM CANNOT BE CANCELLED OR RETURNED.


Summary

WILEY-INTERSCIENCE PAPERBACK SERIES: The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.

"...Variance Components is an excellent book. It is organized and well written, and provides many references to a variety of topics. I recommend it to anyone with interest in linear models." (Journal of the American Statistical Association)

"This book provides a broad coverage of methods for estimating variance components which appeal to students and research workers... The authors make an outstanding contribution to teaching and research in the field of variance component estimation." (Mathematical Reviews)

"The authors have done an excellent job in collecting materials on a broad range of topics. Readers will indeed gain from using this book... I must say that the authors have done a commendable job in their scholarly presentation." (Technometrics)

This book focuses on summarizing the variability of statistical data through the analysis of variance table. Written in a readable style, it provides an up-to-date treatment of research in the area. The book begins with the history of analysis of variance and continues with discussions of balanced data, analysis of variance for unbalanced data, predictions of random variables, hierarchical models and Bayesian estimation, binary and discrete data, and the dispersion-mean model.
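As a hedged illustration of the book's starting point (this sketch is not taken from the book's text), the classical ANOVA estimators for the balanced 1-way random model y_ij = μ + α_i + e_ij estimate the error component by the within-group mean square, MSE, and the between-group component by (MSA − MSE)/n:

```python
# Illustrative sketch: ANOVA estimation of variance components for a
# balanced 1-way random model, y_ij = mu + alpha_i + e_ij.
# All names here are illustrative, not from the book.
import random


def anova_variance_components(groups):
    """Return (sigma2_alpha_hat, sigma2_e_hat) for balanced 1-way data.

    groups: a list of equal-length lists of observations.
    Uses sigma2_e_hat = MSE and sigma2_alpha_hat = (MSA - MSE) / n.
    """
    a = len(groups)        # number of levels (groups)
    n = len(groups[0])     # observations per level (balanced)
    group_means = [sum(g) / n for g in groups]
    grand_mean = sum(group_means) / a
    # Mean square among groups and mean square within groups (error)
    msa = n * sum((m - grand_mean) ** 2 for m in group_means) / (a - 1)
    mse = sum((y - m) ** 2
              for g, m in zip(groups, group_means)
              for y in g) / (a * (n - 1))
    # Note: (msa - mse)/n can be negative; the book's "Negative
    # estimates" sections discuss exactly this possibility.
    return (msa - mse) / n, mse


# Simulate data with known components: var(alpha) = 4, var(e) = 1.
random.seed(1)
data = []
for _ in range(200):
    alpha = random.gauss(0, 2)
    data.append([10 + alpha + random.gauss(0, 1) for _ in range(5)])

s2a, s2e = anova_variance_components(data)
print(s2a, s2e)  # estimates should be near 4 and 1
```

With 200 groups of 5 observations each, the estimates land close to the simulated values; with unbalanced data these simple closed forms no longer apply, which is what motivates the Henderson methods, ML, and REML covered later in the book.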

Author Biography

SHAYLE R. SEARLE, PhD, is Professor Emeritus of Biometry at Cornell University. He is the author of Linear Models, Linear Models for Unbalanced Data, and Matrix Algebra Useful for Statistics, all from Wiley.

GEORGE CASELLA, PhD, is Professor and Chair of the Department of Statistics at the University of Florida. His research interests include decision theory and statistical confidence.

CHARLES E. McCULLOCH, PhD, is Professor of Biostatistics at the University of California, San Francisco. He is the author of numerous scientific publications on biometrics and biological statistics. He is a coauthor, with Shayle R. Searle, of Generalized, Linear, and Mixed Models (Wiley, 2001).

Table of Contents

Introduction
1(18)
Factors, levels, cells and effects
2(2)
Balanced and unbalanced data
4(3)
Balanced data
4(1)
Special cases of unbalanced data
4(1)
Planned unbalancedness
4(2)
Estimating missing observations
6(1)
Unbalanced data
6(1)
Fixed effects and random effects
7(8)
Fixed effects models
7(1)
Example 1 (Tomato varieties)
7(1)
Example 2 (Medications)
8(1)
Example 3 (Soils and fertilizers)
9(1)
Random effects models
9(1)
Example 4 (Clinics)
9(3)
Example 5 (Dairy bulls)
12(1)
Example 6 (Ball bearings and calipers)
13(1)
Mixed models
14(1)
Example 7 (Medications and clinics)
14(1)
Example 8 (Varieties and gardens)
14(1)
Fixed or random?
15(1)
Example 9 (Mice and technicians)
15(1)
Finite populations
16(1)
Summary
17(2)
Characteristics of the fixed effects model and the random effects model for the 1-way classification
17(1)
Examples
17(1)
Fixed or random
18(1)
History and Comment
19(25)
Analysis of variance
19(3)
Early years: 1861--1949
22(11)
Sources
22(1)
Pre-1900
23(2)
1900--1939
25(1)
R. A. Fisher
25(2)
L. C. Tippett
27(2)
The late 1930s
29(2)
Unbalanced data
31(1)
The 1940s
32(1)
Great strides: 1950--1969
33(7)
The Henderson methods
34(1)
ANOVA estimation, in general
35(1)
Negative estimates
35(1)
Unbiasedness
36(1)
Best unbiasedness
37(1)
Minimal sufficient statistics
38(1)
Lack of uniqueness
38(2)
Into the 1970s and beyond
40(4)
Maximum likelihood (ML)
40(1)
Restricted maximum likelihood (REML)
41(1)
Minimum norm estimation
41(1)
The dispersion-mean model
42(1)
Bayes estimation
42(1)
The recent decade
43(1)
The 1-Way Classification
44(68)
The model
44(3)
The model equation
44(1)
First moments
45(1)
Second moments
45(2)
Matrix formulation of the model
47(5)
Example 1
47(1)
The general case
48(1)
Dispersion matrices
49(1)
The traditional random model
49(1)
Other alternatives
50(1)
Unbalanced data
51(1)
Example 2
51(1)
The general case
52(1)
Dispersion matrix
52(1)
Estimating the mean
52(2)
Predicting random effects
54(3)
ANOVA estimation---balanced data
57(12)
Expected sums of squares
57(1)
A direct derivation
58(1)
Using the matrix formulation
58(1)
ANOVA estimators
59(1)
Negative estimates
60(2)
Normality assumptions
62(1)
χ2-distributions of sums of squares
62(1)
Independence of sums of squares
63(1)
Sampling variances of estimators
63(1)
An F-statistic to test H: σ²α = 0
64(1)
Confidence intervals
65(1)
Probability of a negative estimate
66(3)
Distribution of estimators
69(1)
ANOVA estimation---unbalanced data
69(9)
Expected sums of squares
69(1)
A direct derivation
69(1)
Using the matrix formulation
70(1)
ANOVA estimators
71(1)
Negative estimates
72(1)
Normality assumptions
73(1)
χ2-distributions of sums of squares
73(1)
Independence of sums of squares
73(1)
Sampling variances of estimators
74(1)
The effect of unbalancedness on sampling variances
75(1)
F-statistics
76(1)
Confidence intervals
76(2)
Maximum likelihood estimation (MLE)
78(12)
Balanced data
79(1)
Likelihood
79(1)
ML equations and their solutions
80(1)
ML estimators
81(3)
Expected values and bias
84(1)
Sampling variances
85(1)
Unbalanced data
86(1)
Likelihood
86(1)
ML equations and their solutions
87(1)
ML estimators
88(1)
Bias
88(1)
Sampling variances
88(2)
Restricted maximum likelihood estimation (REML)
90(4)
Balanced data
91(1)
Likelihood
91(1)
REML equations and their solutions
91(1)
REML estimators
92(1)
Comparison with ANOVA and ML
92(1)
Bias
93(1)
Sampling variances
93(1)
Unbalanced data
93(1)
Bayes estimation
94(9)
A simple sample
94(3)
The 1-way classification, random model
97(2)
Balanced data
99(4)
A summary
103(5)
Balanced data
103(3)
Unbalanced data
106(2)
Exercises
108(4)
Balanced Data
112(56)
Establishing analysis of variance tables
113(3)
Factors and levels
113(1)
Lines in the analysis of variance tables
113(1)
Interactions
113(1)
Degrees of freedom
114(1)
Sums of squares
114(1)
Calculating sums of squares
115(1)
Expected mean squares, E(MS)
116(2)
The 2-way crossed classification
118(10)
Introduction
118(1)
Analysis of variance table
118(1)
Expected mean squares
119(2)
The fixed effects model
121(1)
The random effects model
122(1)
The mixed model
122(1)
A mixed model with Σ-restrictions
123(4)
ANOVA estimators of variance components
127(1)
ANOVA estimation
128(3)
Normality assumptions
131(7)
Distribution of mean squares
131(1)
Distribution of estimators
132(1)
Tests of hypotheses
133(2)
Confidence intervals
135(2)
Probability of a negative estimate
137(1)
Sampling variances of estimators
137(1)
A matrix formulation of mixed models
138(8)
The general mixed model
138(2)
The 2-way crossed classification
140(1)
Model equation
140(2)
Random or mixed?
142(1)
Dispersion matrix
142(1)
The 2-way nested classification
142(1)
Interaction or nested factor?
143(1)
The general case
143(1)
Model equation
144(1)
Dispersion matrix
144(2)
Maximum likelihood estimation (ML)
146(12)
Estimating the mean in random models
146(1)
Four models with closed form estimators
147(1)
The 1-way random model
147(1)
The 2-way nested random model
148(1)
The 2-way crossed, with interaction, mixed model
149(1)
The 2-way crossed, no interaction, mixed model
150(1)
Unbiasedness
151(1)
The 2-way crossed classification, random model
151(1)
With interaction
152(1)
No interaction
153(1)
Existence of explicit solutions
153(1)
Asymptotic sampling variances for the 2-way crossed classification
154(1)
The 2-way crossed classification, with interaction, random model
155(1)
The 2-way crossed classification, no interaction, random model
155(1)
The 2-way crossed classification, with interaction, mixed model
156(1)
The 2-way crossed classification, no interaction, mixed model
156(1)
Asymptotic sampling variances for two other models
157(1)
The 2-way nested classification, random model
157(1)
The 1-way classification, random model
158(1)
Locating results
158(1)
Restricted maximum likelihood
158(1)
Estimating fixed effects in mixed models
159(2)
Summary
161(2)
Exercises
163(5)
Analysis of Variance Estimation for Unbalanced Data
168(64)
Model formulation
169(3)
Data
169(1)
A general model
170(1)
Example 1---the 2-way crossed classification random model
170(1)
Dispersion matrix
171(1)
Example 1 (continued)
172(1)
ANOVA estimation
172(9)
Example 2---the 1-way random model, balanced data
172(1)
Estimation
172(1)
The general case
172(1)
Example 2 (continued)
173(1)
Example 1 (continued)
173(2)
Unbiasedness
175(1)
Sampling variances
176(1)
A general result
176(1)
Example 2 (continued)
177(1)
A direct approach
178(1)
Unbiased estimation of sampling variances
179(2)
Henderson's Method I
181(9)
The quadratic forms
181(2)
Estimation
183(1)
Negative estimates
184(1)
Sampling variances
184(2)
A general coefficient for Method I
186(1)
Synthesis
187(1)
Mixed models
188(1)
Merits and demerits
189(1)
A numerical example
189(1)
Henderson's Method II
190(11)
Estimating the fixed effects
191(1)
Calculation
192(2)
Verification
194(1)
The matrix XLZ is null
194(1)
Row sums of XL are all the same
195(1)
All rows of X − XLX are the same
195(1)
Invariance
196(1)
Rank properties
196(1)
An estimable function
197(1)
Two solutions
198(1)
The quadratic forms
198(1)
Coefficients of σ²e
198(1)
No fixed-by-random interactions
199(2)
Merits and demerits
201(7)
Henderson's Method III
202(1)
Borrowing from fixed effects models
202(1)
Reductions in sum of squares
202(1)
Expected sums of squares
203(1)
Mixed models
204(2)
A general result
206(1)
Sampling variances
207(1)
Merits and demerits
208(1)
Method III applied to the 2-way crossed classification
208(10)
No interaction, random model
209(1)
One set of sums of squares
209(1)
Three sets of sums of squares
210(3)
Calculation
213(1)
Sampling variances
213(1)
No interaction, mixed model
213(1)
With interaction, random model
213(1)
One set of sums of squares
214(1)
Three sets of sums of squares
215(2)
Calculation
217(1)
Sampling variances
217(1)
With interaction, mixed models
218(1)
Nested models
218(1)
Other forms of ANOVA estimation
219(2)
Unweighted means method: all cells filled
219(1)
Weighted squares of means: all cells filled
220(1)
Comparing different forms of ANOVA estimation
221(4)
Estimating fixed effects in mixed models
225(1)
Summary
226(1)
Exercises
227(5)
Maximum Likelihood (ML) and Restricted Maximum Likelihood (REML)
232(26)
The model and likelihood function
233(1)
The ML estimation equations
234(4)
A direct derivation
234(2)
An alternative form
236(1)
The Hartley--Rao form
237(1)
Asymptotic dispersion matrices for ML estimators
238(4)
For variance components
238(2)
For ratios of components
240(1)
Maximum?
241(1)
Some remarks on computing
242(1)
ML results for 2-way crossed classification, balanced data
243(6)
2-way crossed, random model, with interaction
243(1)
Notation
243(1)
Inverse of V
244(1)
The estimation equations
245(2)
Information matrix
247(2)
2-way crossed, random model, no interaction
249(1)
Restricted maximum likelihood (REML)
249(5)
Linear combinations of observations
250(1)
The REML equations
251(1)
An alternative form
251(1)
Invariance to choice of error contrasts
252(1)
The information matrix
252(1)
Balanced data
253(1)
Using cell means models for the fixed effects
253(1)
Estimating fixed effects in mixed models
254(1)
ML
254(1)
REML
254(1)
ML or REML?
254(1)
Summary
255(1)
Exercises
256(2)
Prediction of Random Variables
258(32)
Introduction
258(3)
Best prediction (BP)
261(4)
The best predictor
261(1)
Mean and variance properties
262(1)
Two properties of the best predictor of a scalar
263(1)
Maximizing a correlation
264(1)
Maximizing the mean of a selected proportion
264(1)
Normality
265(1)
Best linear prediction (BLP)
265(4)
BLP(u)
265(1)
Example
266(1)
Derivation
267(1)
Ranking
268(1)
Mixed model prediction (BLUP)
269(4)
Combining fixed and random effects
269(1)
Example
270(1)
Derivation of BLUP
271(1)
Variances and covariances
272(1)
Normality
273(1)
Other derivations of BLUP
273(2)
A two-stage derivation
273(1)
A direct derivation assuming linearity
274(1)
Partitioning y into two parts
274(1)
A Bayes estimator
275(1)
Henderson's mixed model equations (MME)
275(11)
Derivation
275(1)
Solutions
276(2)
Calculations for ML estimation
278(1)
The estimation equations
278(2)
The information matrix
280(2)
Calculations for REML estimation
282(1)
The estimation equations
282(2)
The information matrix
284(1)
Iterative procedures summarized
284(1)
Adapting the MME
284(1)
Using the MME
284(1)
Iterating for ML
285(1)
Iterating for REML
285(1)
A summary
286(1)
Summary
286(1)
Exercises
287(3)
Computing ML and REML estimates
290(25)
Introduction
290(2)
Iterative methods based on derivatives
292(5)
The basis of the methods
292(1)
The Newton--Raphson and Marquardt methods
293(2)
Method of scoring
295(1)
Quasi-Newton methods
295(1)
Obtaining starting values
295(1)
Termination rules
296(1)
Incorporation of non-negativity constraints
296(1)
Easing the computational burden
296(1)
The EM algorithm
297(7)
A general formulation
297(1)
Distributional derivations needed for the EM algorithm
298(2)
EM algorithm for ML estimation (Version 1)
300(1)
EM algorithm for ML estimation (Version 2)
301(1)
Equivalence of the EM algorithm to the ML equations
302(1)
EM algorithm for REML estimation
302(1)
A Bayesian justification for REML
303(1)
Non-zero correlations among the u's
304(1)
General methods that converge rapidly for balanced data
304(1)
Pooling estimators from subsets of a large data set
305(2)
Example: the 1-way random model
307(4)
The EM algorithm (Version 1)
308(2)
The method of scoring algorithm
310(1)
Discussion
311(2)
Computing packages
311(1)
Evaluation of algorithms
312(1)
Summary
313(1)
Exercises
314(1)
Hierarchical Models and Bayesian Estimation
315(52)
Basic principles
315(6)
Introduction
315(1)
Simple examples
316(1)
The mixed model hierarchy
317(1)
The normal hierarchy
318(1)
Point estimator of variance or variance of point estimator
319(2)
Variance estimation in the normal hierarchy
321(6)
Formal hierarchical estimation
321(1)
Likelihood methods
321(4)
Empirical Bayes estimation
325(1)
General strategies
325(1)
Estimation
325(1)
Connections with likelihood
326(1)
Estimation of effects
327(16)
Hierarchical estimation
327(1)
Estimation of β
328(2)
Estimation of u
330(1)
An alternative derivation
331(1)
Exploiting the multivariate normal structure
331(2)
Relationship to BLUP
333(1)
The 1-way classification, random model
333(1)
Estimation of μ
334(1)
Estimation of α
335(2)
Empirical Bayes estimation
337(1)
The 1-way classification
337(1)
Cautions
338(3)
Variance approximations
341(2)
Other types of hierarchies
343(6)
A beta-binomial model
344(3)
A generalized linear model
347(2)
Practical considerations in hierarchical modeling
349(3)
Computational problems
349(1)
Hierarchical EM
350(2)
Philosophical considerations in hierarchical modeling
352(3)
Summary
355(4)
Exercises
359(8)
Binary and Discrete Data
367(11)
Introduction
367(2)
ANOVA methods
369(1)
Beta--binomial models
369(3)
Introduction
369(1)
Model specification
370(1)
Likelihood
371(1)
Discussion
371(1)
Logit-normal models
372(1)
Probit-normal models
373(1)
Introduction
373(1)
An example
373(1)
Discussion
374(1)
Summary
375(1)
Exercises
376(2)
Other Procedures
378(27)
Estimating components of covariance
378(9)
Easy ANOVA estimation for certain models
379(1)
Examples of covariance components models
380(1)
Covariances between effects of the same random factor
381(1)
Covariances between effects of different random factors
381(1)
Covariances between error terms
382(1)
Combining variables into a single vector
382(1)
Genetic covariances
383(1)
Maximum likelihood (ML) estimation
383(1)
Estimation equations
383(2)
Large-sample dispersion matrix
385(1)
Restricted maximum likelihood (REML) estimation
386(1)
Estimation equations
386(1)
Large-sample dispersion matrix
387(1)
Modeling variance components as covariances
387(4)
All-cells-filled data
388(1)
Balanced data
389(1)
Diagnostic opportunities
389(1)
Some-cells-empty data
390(1)
Criteria-based procedures
391(9)
Three criteria
392(1)
Unbiasedness
392(1)
Translation invariance
392(1)
Minimum variance
393(1)
LaMotte's minimum mean square procedures
393(1)
Class C0: unrestricted
393(1)
Class C1: expectation of y'Ay containing no β
394(1)
Class C2: translation-invariant
394(1)
Class C3: unbiased
394(1)
Class C4: translation-invariant and unbiased
394(1)
Minimum variance estimation (MINVAR)
394(3)
Minimum norm estimation (MINQUE)
397(1)
REML, MINQUE and I-MINQUE
398(1)
REML for balanced data
399(1)
MINQUE0
399(1)
MINQUE for the 1-way classification
399(1)
Summary
400(2)
Exercises
402(3)
The Dispersion-Mean Model
405(22)
The model
405(1)
Ordinary least squares (OLS) yields MINQUE0
406(1)
Fourth moments in the mixed model
407(6)
Dispersion matrix of u ⊗ u
407(1)
A normalizing transformation
407(1)
Example
408(2)
The general form of E(ww' ⊗ ww')
410(1)
Fourth central moments of y
411(1)
General case
411(1)
Under normality
411(1)
Dispersion matrix of y
411(1)
General case
411(1)
Under normality
412(1)
Variance of a translation-invariant quadratic form
412(1)
Generalized least squares (GLS)
413(3)
GLS yields REML equations under normality
413(2)
Excursus on estimating fixed effects
415(1)
REML is BLUE
415(1)
Modified GLS yields ML
416(1)
Balanced data
417(7)
Estimation under zero kurtosis
417(1)
History
417(2)
The model
419(2)
Conclusion
421(1)
Estimation under non-zero kurtosis
421(1)
The model
421(1)
ANOVA estimation
422(2)
Conclusion
424(1)
Non-negative estimation
424(1)
Summary
425(1)
Exercises
426(1)
Appendix F. Estimation Formulae for Unbalanced Data
427(1)
PART I. THREE NESTED MODELS
427(7)
The 1-way classification
427(2)
Model
427(1)
Analysis of variance estimators
428(1)
Variances of analysis of variance estimators (under normality)
428(1)
Maximum likelihood estimation (under normality)
428(1)
Large-sample variances of maximum likelihood estimators (under normality)
428(1)
The 2-way nested classification
429(2)
Model
429(1)
Analysis of variance estimators
429(1)
Variances of analysis of variance estimators (under normality)
429(1)
Large-sample variances of maximum likelihood estimators (under normality)
430(1)
The 3-way nested classification
431(3)
Model
431(1)
Analysis of variance estimators
431(1)
Variances of analysis of variance estimators (under normality)
432(2)
PART II. THE 2-WAY CROSSED CLASSIFICATION
434(8)
With interaction, random model
434(4)
Model
434(1)
Henderson Method I estimators
434(1)
Variances of Henderson Method I estimators (under normality)
435(2)
Henderson Method III estimators
437(1)
With interaction, mixed model
438(1)
Model
438(1)
Henderson Method III
438(1)
No interaction, random model
439(2)
Model
439(1)
Henderson Method I
439(1)
Variances of Henderson Method I estimators (under normality)
439(1)
Henderson Method III
440(1)
Variances of Henderson Method III estimators (under normality)
440(1)
No interaction, mixed model
441(1)
Model
441(1)
Henderson Method III
441(1)
Appendix M. Some Results in Matrix Algebra
442(19)
Summing vectors, and J-matrices
442(1)
Direct sums and products
443(2)
A matrix notation in terms of elements
445(2)
Generalized inverses
447(6)
Definitions
447(1)
Generalized inverses of X'X
448(2)
Partitioning X'X
450(1)
Rank results
451(1)
Vectors orthogonal to columns of X
451(1)
A theorem involving K' of maximum row rank for K'X being null
451(2)
The Schur complement
453(1)
The trace of a matrix
454(1)
Differentiation of matrix expressions
454(4)
Scalars
454(1)
Vectors
455(1)
Inner products
455(1)
Quadratic forms
455(1)
Inverses
456(1)
Determinants
456(1)
Traces
457(1)
The operators vec and vech
458(1)
Vec permutation matrices
459(1)
The equality VV⁻X = X
460(1)
Appendix S. Some Results in Statistics
461(14)
Conditional first and second moments
461(1)
Least squares estimation
462(1)
Normal and χ2-distributions
463(2)
Central χ2
464(1)
Mean squares
464(1)
Non-central χ2
465(1)
F-distributions
465(1)
Quadratic forms
466(1)
Bayes estimation
467(5)
Density functions
467(1)
Bayes Theorem
468(1)
Bayes estimation
469(1)
Example
469(2)
Empirical Bayes estimation
471(1)
Maximum likelihood
472(3)
The likelihood function
472(1)
Maximum likelihood estimation
472(1)
Asymptotic dispersion matrix
473(1)
Transforming parameters
474(1)
References 475(15)
List of Tables and Figures 490(3)
Author Index 493(4)
Subject Index 497

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
