

Model Selection and Multimodel Inference

by Burnham, Kenneth P.; Anderson, David R.
  • ISBN13: 9780387953649
  • ISBN10: 0387953647

  • Edition: 2nd
  • Format: Hardcover
  • Copyright: 2002-07-01
  • Publisher: Springer Nature
List Price: $149.99 (save up to $86.20)
  • Digital: $138.21


Summary

The second edition of this book is unique in its focus on methods for making formal statistical inference from all the models in an a priori set (multimodel inference). It presents a philosophy for model-based data analysis and outlines a general strategy for the analysis of empirical data, inviting increased attention to a priori science hypotheses and modeling.

Kullback-Leibler information is a fundamental quantity in science and was Hirotugu Akaike's basis for model selection. The maximized log-likelihood can be bias-corrected to serve as an estimator of expected, relative Kullback-Leibler information; this leads to Akaike's Information Criterion (AIC) and various extensions. These methods are relatively simple and easy to use in practice, yet rest on deep statistical theory. The information-theoretic approaches provide a unified and rigorous framework, an extension of likelihood theory, and an important application of information theory, and they are objective and practical to employ across a very wide class of empirical problems.

The book presents several new ways to incorporate model selection uncertainty into parameter estimates and estimates of precision. An array of challenging examples illustrates various technical issues.

This is an applied book written primarily for biologists and statisticians who want to make inferences from multiple models. It is suitable as a graduate text or as a reference for professional analysts.
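The core quantities named here reduce to a few short formulas: AIC = -2 log(L) + 2K for a model with K estimated parameters, the small-sample form AICc = AIC + 2K(K+1)/(n - K - 1), the differences Δi = AICi - AICmin, and the Akaike weights wi = exp(-Δi/2) / Σr exp(-Δr/2), which in turn drive model-averaged estimates. As a minimal Python sketch of these computations (the model names, log-likelihoods, sample size, and per-model estimates below are made up for illustration, not taken from the book):

    import math

    # Hypothetical candidate models: (name, maximized log-likelihood, K parameters).
    models = [("g1", -247.3, 3), ("g2", -245.1, 5), ("g3", -244.8, 7)]
    n = 100  # sample size (illustrative)

    def aicc(loglik, K, n):
        """AIC = -2*loglik + 2K, plus the small-sample (second-order) correction."""
        aic = -2.0 * loglik + 2.0 * K
        return aic + (2.0 * K * (K + 1)) / (n - K - 1)

    scores = {name: aicc(ll, K, n) for name, ll, K in models}
    best = min(scores.values())
    deltas = {name: s - best for name, s in scores.items()}   # AIC differences, Delta_i
    raw = {name: math.exp(-0.5 * d) for name, d in deltas.items()}
    total = sum(raw.values())
    weights = {name: r / total for name, r in raw.items()}    # Akaike weights, w_i

    # Model averaging: combine per-model estimates of a parameter common to all
    # models, weighting each by its Akaike weight (estimates are hypothetical).
    theta_hat = {"g1": 0.42, "g2": 0.47, "g3": 0.45}
    theta_bar = sum(weights[m] * theta_hat[m] for m in weights)

    for name in scores:
        print(f"{name}: delta = {deltas[name]:6.2f}  weight = {weights[name]:.3f}")
    print(f"model-averaged estimate = {theta_bar:.3f}")

The ratio of two weights, wi/wj, is the evidence ratio for model i over model j; the weights also feed the unconditional (model-selection-aware) variance estimates developed in the book's multimodel inference chapter.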

Author Biography

Dr. Kenneth P. Burnham (a statistician) has applied and developed statistical theory for thirty years in several areas of the life sciences, especially ecology and wildlife. He is the recipient of numerous professional awards, including the Distinguished Achievement Medal from the American Statistical Association's Section on Statistics and the Environment and the Distinguished Statistical Ecologist Award from INTECOL (International Congress of Ecology). Dr. Burnham is a fellow of the American Statistical Association.

Dr. David R. Anderson is a senior scientist with the Biological Resources Division of the U.S. Geological Survey and a professor in the Department of Fishery and Wildlife Biology, Colorado State University. He is the recipient of numerous professional awards for scientific and academic contributions, including the Meritorious Service Award from the U.S. Department of the Interior.

Table of Contents

Preface vii
About the Authors xxi
Glossary xxiii
Introduction 1(48)
Objectives of the Book 1(4)
Background Material 5(18)
Inference from Data, Given a Model 5(1)
Likelihood and Least Squares Theory 6(7)
The Critical Issue: ``What Is the Best Model to Use?'' 13(2)
Science Inputs: Formulation of the Set of Candidate Models 15(5)
Models Versus Full Reality 20(2)
An Ideal Approximating Model 22(1)
Model Fundamentals and Notation 23(6)
Truth or Full Reality f 23(1)
Approximating Models gi(x|θ) 23(2)
The Kullback-Leibler Best Model gi(x|θo) 25(1)
Estimated Models gi(x|θ̂) 25(1)
Generating Models 26(1)
Global Model 26(1)
Overview of Stochastic Models in the Biological Sciences 27(2)
Inference and the Principle of Parsimony 29(8)
Avoid Overfitting to Achieve a Good Model Fit 29(2)
The Principle of Parsimony 31(4)
Model Selection Methods 35(2)
Data Dredging, Overanalysis of Data, and Spurious Effects 37(6)
Overanalysis of Data 38(2)
Some Trends 40(3)
Model Selection Bias 43(2)
Model Selection Uncertainty 45(2)
Summary 47(2)
Information and Likelihood Theory: A Basis for Model Selection and Inference 49(49)
Kullback-Leibler Information or Distance Between Two Models 50(10)
Examples of Kullback-Leibler Distance 54(4)
Truth, f, Drops Out as a Constant 58(2)
Akaike's Information Criterion: 1973 60(5)
Takeuchi's Information Criterion: 1976 65(1)
Second-Order Information Criterion: 1978 66(1)
Modification of Information Criterion for Overdispersed Count Data 67(3)
AIC Differences, Δi 70(2)
A Useful Analogy 72(2)
Likelihood of a Model, L(gi|data) 74(1)
Akaike Weights, wi 75(2)
Basic Formula 75(1)
An Extension 76(1)
Evidence Ratios 77(3)
Important Analysis Details 80(5)
AIC Cannot Be Used to Compare Models of Different Data Sets 80(1)
Order Not Important in Computing AIC Values 81(1)
Transformations of the Response Variable 81(1)
Regression Models with Differing Error Structures 82(1)
Do Not Mix Null Hypothesis Testing with Information-Theoretic Criteria 83(1)
Null Hypothesis Testing Is Still Important in Strict Experiments 83(1)
Information-Theoretic Criteria Are Not a ``Test'' 84(1)
Exploratory Data Analysis 84(1)
Some History and Further Insights 85(5)
Entropy 86(1)
A Heuristic Interpretation 87(1)
More on Interpreting Information-Theoretic Criteria 87(1)
Nonnested Models 88(1)
Further Insights 89(1)
Bootstrap Methods and Model Selection Frequencies πi 90(4)
Introduction 91(2)
The Bootstrap in Model Selection: The Basic Idea 93(1)
Return to Flather's Models 94(2)
Summary 96(2)
Basic Use of the Information-Theoretic Approach 98(51)
Introduction 98(2)
Example 1: Cement Hardening Data 100(6)
Set of Candidate Models 101(1)
Some Results and Comparisons 102(4)
A Summary 106(1)
Example 2: Time Distribution of an Insecticide Added to a Simulated Ecosystem 106(5)
Set of Candidate Models 108(2)
Some Results 110(1)
Example 3: Nestling Starlings 111(15)
Experimental Scenario 112(1)
Monte Carlo Data 113(1)
Set of Candidate Models 113(4)
Data Analysis Results 117(3)
Further Insights into the First Fourteen Nested Models 120(1)
Hypothesis Testing and Information-Theoretic Approaches Have Different Selection Frequencies 121(3)
Further Insights Following Final Model Selection 124(1)
Why Not Always Use the Global Model for Inference? 125(1)
Example 4: Sage Grouse Survival 126(11)
Introduction 126(1)
Set of Candidate Models 127(2)
Model Selection 129(2)
Hypothesis Tests for Year-Dependent Survival Probabilities 131(1)
Hypothesis Testing Versus AIC in Model Selection 132(2)
A Class of Intermediate Models 134(3)
Example 5: Resource Utilization of Anolis Lizards 137(4)
Set of Candidate Models 138(1)
Comments on Analytic Method 138(1)
Some Tentative Results 139(2)
Example 6: Sakamoto et al.'s (1986) Simulated Data 141(1)
Example 7: Models of Fish Growth 142(1)
Summary 143(6)
Formal Inference From More Than One Model: Multimodel Inference (MMI) 149(57)
Introduction to Multimodel Inference 149(1)
Model Averaging 150(3)
Prediction 150(1)
Averaging Across Model Parameters 151(2)
Model Selection Uncertainty 153(14)
Concepts of Parameter Estimation and Model Selection Uncertainty 155(3)
Including Model Selection Uncertainty in Estimator Sampling Variance 158(6)
Unconditional Confidence Intervals 164(3)
Estimating the Relative Importance of Variables 167(2)
Confidence Set for the K-L Best Model 169(4)
Introduction 169(2)
Δi, Model Selection Probabilities, and the Bootstrap 171(2)
Model Redundancy 173(3)
Recommendations 176(1)
Cement Data 177(6)
Pine Wood Data 183(4)
The Durban Storm Data 187(8)
Models Considered 188(2)
Consideration of Model Fit 190(1)
Confidence Intervals on Predicted Storm Probability 191(2)
Comparisons of Estimator Precision 193(2)
Flour Beetle Mortality: A Logistic Regression Example 195(6)
Publication of Research Results 201(2)
Summary 203(3)
Monte Carlo Insights and Extended Examples 206(61)
Introduction 206(1)
Survival Models 207(17)
A Chain Binomial Survival Model 207(3)
An Example 210(5)
An Extended Survival Model 215(4)
Model Selection if Sample Size Is Huge, or Truth Known 219(2)
A Further Chain Binomial Model 221(3)
Examples and Ideas Illustrated with Linear Regression 224(31)
All-Subsets Selection: A GPA Example 225(4)
A Monte Carlo Extension of the GPA Example 229(6)
An Improved Set of GPA Prediction Models 235(3)
More Monte Carlo Results 238(6)
Linear Regression and Variable Selection 244(4)
Discussion 248(7)
Estimation of Density from Line Transect Sampling 255(9)
Density Estimation Background 255(1)
Line Transect Sampling of Kangaroos at Wallaby Creek 256(1)
Analysis of Wallaby Creek Data 256(2)
Bootstrap Analysis 258(1)
Confidence Interval on D 258(2)
Bootstrap Samples: 1,000 Versus 10,000 260(1)
Bootstrap Versus Akaike Weights: A Lesson on QAICc 261(3)
Summary 264(3)
Advanced Issues and Deeper Insights 267(85)
Introduction 267(1)
An Example with 13 Predictor Variables and 8,191 Models 268(16)
Body Fat Data 268(1)
The Global Model 269(1)
Classical Stepwise Selection 269(2)
Model Selection Uncertainty for AICc and BIC 271(3)
An A Priori Approach 274(2)
Bootstrap Evaluation of Model Uncertainty 276(3)
Monte Carlo Simulations 279(2)
Summary Messages 281(3)
Overview of Model Selection Criteria 284(9)
Criteria That Are Estimates of K-L Information 284(2)
Criteria That Are Consistent for K 286(2)
Contrasts 288(1)
Consistent Selection in Practice: Quasi-true Models 289(4)
Contrasting AIC and BIC 293(12)
A Heuristic Derivation of BIC 293(2)
A K-L-Based Conceptual Comparison of AIC and BIC 295(3)
Performance Comparison 298(3)
Exact Bayesian Model Selection Formulas 301(1)
Akaike Weights as Bayesian Posterior Model Probabilities 302(3)
Goodness-of-Fit and Overdispersion Revisited 305(5)
Overdispersion c and Goodness-of-Fit: A General Strategy 305(2)
Overdispersion Modeling: More Than One c 307(2)
Model Goodness-of-Fit After Selection 309(1)
AIC and Random Coefficient Models 310(7)
Basic Concepts and Marginal Likelihood Approach 310(3)
A Shrinkage Approach to AIC and Random Effects 313(3)
On Extensions 316(1)
Selection When Probability Distributions Differ by Model 317(6)
Keep All the Parts 317(1)
A Normal Versus Log-Normal Example 318(2)
Comparing Across Several Distributions: An Example 320(3)
Lessons from the Literature and Other Matters 323(11)
Use AICc, Not AIC, with Small Sample Sizes 323(2)
Use AICc, Not AIC, When K Is Large 325(1)
When Is AIC Suitable: A Gamma Distribution Example 326(2)
Inference from a Less Than Best Model 328(2)
Are Parameters Real? 330(2)
Sample Size Is Often Not a Simple Issue 332(1)
Judgment Has a Role 333(1)
Tidbits About AIC 334(13)
Irrelevance of Between-Sample Variation of AIC 334(2)
The G-Statistic and K-L Information 336(1)
AIC Versus Hypothesis Testing: Results Can Be Very Different 337(2)
A Subtle Model Selection Bias Issue 339(1)
The Dimensional Unit of AIC 340(2)
AIC and Finite Mixture Models 342(2)
Unconditional Variance 344(1)
A Baseline for w+(i) 345(2)
Summary 347(5)
Statistical Theory and Numerical Results 352(85)
Useful Preliminaries 352(10)
A General Derivation of AIC 362(9)
General K-L-Based Model Selection: TIC 371(3)
Analytical Computation of TIC 371(1)
Bootstrap Estimation of TIC 372(2)
AICc: A Second-Order Improvement 374(6)
Derivation of AICc 374(5)
Lack of Uniqueness of AICc 379(1)
Derivation of AIC for the Exponential Family of Distributions 380(4)
Evaluation of tr(J(θo)[I(θo)]⁻¹) and Its Estimator 384(28)
Comparison of AIC Versus TIC in a Very Simple Setting 385(5)
Evaluation Under Logistic Regression 390(7)
Evaluation Under Multinomially Distributed Count Data 397(8)
Evaluation Under Poisson-Distributed Data 405(1)
Evaluation for Fixed-Effects Normality-Based Linear Models 406(6)
Additional Results and Considerations 412(17)
Selection Simulation for Nested Models 412(3)
Simulation of the Distribution of Δp 415(2)
Does AIC Overfit? 417(2)
Can Selection Be Improved Based on All the Δi? 419(2)
Linear Regression, AIC, and Mean Square Error 421(3)
AICc and Models for Multivariate Data 424(2)
There Is No True TICc 426(1)
Kullback-Leibler Information Relationship to the Fisher Information Matrix 426(1)
Entropy and Jaynes Maxent Principle 427(1)
Akaike Weights wi Versus Selection Probabilities πi 428(1)
Kullback-Leibler Information Is Always ≥ 0 429(5)
Summary 434(3)
Summary 437(18)
The Scientific Question and the Collection of Data 439(1)
Actual Thinking and A Priori Modeling 440(2)
The Basis for Objective Model Selection 442(1)
The Principle of Parsimony 443(1)
Information Criteria as Estimates of Expected Relative Kullback-Leibler Information 444(2)
Ranking Alternative Models 446(1)
Scaling Alternative Models 447(1)
MMI: Inference Based on Model Averaging 448(1)
MMI: Model Selection Uncertainty 449(2)
MMI: Relative Importance of Predictor Variables 451(1)
More on Inferences 451(3)
Final Thoughts 454(1)
References 455(30)
Index 485

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
