A Modern Introduction to Probability and Statistics: Understanding Statistical Principles in the Age of the Computer

by Upton, Graham
  • ISBN13:

    9780198943136

  • ISBN10:

    019894313X

  • eBook ISBN(s):

    9780198943143

  • Format: Paperback
  • Copyright: 2025-10-01
  • Publisher: Oxford University Press
  • Purchase Benefits
    List Price: $155.00 (save up to $54.01)
  • Buy New: $154.85 (free shipping)

    Not yet printed: place an order and we will ship it as soon as it arrives.

Summary

Probability and statistics are subjects fundamental to data analysis, and hence essential to effective artificial intelligence. Although their foundational concepts remain constant, what needs to be taught is constantly evolving.

The first half of the book introduces probability, conditional probability and the standard probability distributions in the traditional way. The second half considers the power of the modern computer and our reliance on technology to do the calculations for us.
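As a toy illustration (not taken from the book) of letting the computer do such a calculation, here is a binomial probability computed with Python's standard library:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p): comb(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability of exactly 3 heads in 10 tosses of a fair coin
print(binom_pmf(3, 10, 0.5))  # → 0.1171875
```

The book itself uses R, where the built-in `dbinom(3, 10, 0.5)` returns the same value.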

Offering a fresh presentation that builds on the author's previous book, Understanding Statistics, this book includes worked examples and exercises (with solutions at the rear of the book). Each chapter closes with a brief mention of the relevant R commands and a summary of its content. Mathematical sections of increasing difficulty are clearly indicated, and these can be omitted without affecting the understanding of the remaining material.

Aimed at first-year graduate students, this book is also suitable for any reader familiar with mathematical notation.

Author Biography

Graham Upton

Graham Upton is a retired Professor of Applied Statistics, formerly of the Department of Mathematical Sciences at the University of Essex. He has published numerous books with OUP, including The Oxford Dictionary of Statistics, Data Analysis: A Gentle Introduction for Future Data Scientists, and Understanding Statistics.

Table of Contents

Part 1: Probability

1. Probability
1.1 Relative frequency
1.2 Preliminary definitions
1.3 The probability scale
1.4 Probability with equally likely outcomes
1.5 The complementary event E'
1.6 Venn diagrams
1.7 Unions and intersections of events
1.8 Mutually exclusive events
1.9 Exhaustive events
1.10 Probability trees
1.11 Sample proportions and probability
1.12 Unequally likely possibilities
1.13 Physical independence
1.14 Orderings
1.15 Permutations and combinations
1.16 Sampling with replacement
1.17 Sampling without replacement

2. Conditional Probability
2.1 Notation
2.2 Statistical independence
2.3 Mutual and pairwise independence
2.4 The total probability theorem (the partition theorem)
2.5 Bayes' theorem
2.6 *The Monty Hall problem

3. Probability Distributions
3.1 Notation
3.2 Probability distributions
3.3 The discrete uniform distribution
3.4 The Bernoulli distribution
3.5 The binomial distribution
3.6 Notation
3.7 'Successes' and 'failures'
3.8 The shape of the binomial distribution
3.9 The geometric distribution
3.10 The Poisson distribution and the Poisson process
3.11 The form of the distribution
3.12 Sums of Poisson random variables
3.13 The Poisson approximation to the binomial
3.14 The negative binomial distribution
3.15 The hypergeometric distribution

4. Expectations
4.1 Expectations of functions
4.2 The population variance
4.3 Sums of random variables
4.4 Mean and variance of common distributions
4.5 The expectation and variance of the sample mean

5. Continuous Random Variables
5.1 The probability density function (pdf)
5.2 The cumulative distribution function, F
5.3 Expectations for continuous variables
5.4 Obtaining f from F
5.5 The uniform (rectangular) distribution
5.6 The exponential distribution
5.7 *The beta distribution
5.8 *The gamma distribution
5.9 *Transformation of a random variable

6. The Normal Distribution
6.1 The general normal distribution
6.2 The use of tables
6.3 Linear combinations of independent normal random variables
6.4 The Central Limit Theorem
6.5 The normal distribution used as an approximation
6.6 *Proof that the area under the normal curve is 1

7. Distributions Related to the Normal Distribution
7.1 The t distribution
7.2 The chi-squared distribution
7.3 The F distribution

8. *Generating Functions
8.1 The probability generating function, G
8.2 The moment generating function

9. *Inequalities and Laws
9.1 Markov's inequality
9.2 Chebyshev's inequality
9.3 The weak law of large numbers
9.4 The strong law of large numbers

10. Joint Distributions
10.1 Joint probability mass function
10.2 Marginal distributions
10.3 Conditional distributions
10.4

Part 2: Statistics

11. Data Sources
11.1 Data collection by observation
11.2 National censuses
11.3 Sampling
11.4 Questionnaires
11.5 Questionnaire design

12. Summarising Data
12.1 A single variable
12.2 Two variables
12.3 More than two variables
12.4 Choosing which display to use
12.5 Dirty data

13. General Summary Statistics
13.1 Measure of location: the mode
13.2 Measure of location: the mean
13.3 Measure of location: the mean of a frequency distribution
13.4 Measure of location: the mean of grouped data
13.5 Simplifying calculations
13.6 Measure of location: the median
13.7 Quantiles
13.8 Measures of spread: the range and inter-quartile range
13.9 Boxplot
13.10 Deviations from the mean
13.11 The mean deviation
13.12 Measure of spread: the variance
13.13 Calculating the variance by hand
13.14 Measure of spread: the standard deviation
13.15 Variance and standard deviation for frequency distributions
13.16 Symmetric and skewed data
13.17 Standardising to a prescribed mean and standard deviation
13.18 *Calculating the combined mean and variance of several samples
13.19 Combining proportions

14. Point and Interval Estimation
14.1 Point estimates
14.2 Estimation methods
14.3 Confidence intervals
14.4 Confidence intervals with discrete distributions
14.5 One-sided confidence intervals
14.6 Confidence intervals for a variance

15. Single-Sample Hypothesis Tests
15.1 The null and alternative hypotheses
15.2 Critical regions and significance levels
15.3 The test procedure
15.4 Identifying two hypotheses
15.5 Tail probabilities: the p-value approach
15.6 Hypothesis tests and confidence intervals
15.7 Hypothesis tests for a mean
15.8 Testing for normality
15.9 Hypothesis test for the variance of a normal distribution
15.10 Hypothesis tests with discrete distributions
15.11 Type I and Type II errors
15.12 Hypothesis tests for a proportion based on a small sample
15.13 Hypothesis tests for a Poisson mean based on a small sample

16. Two Samples and Paired Samples
16.1 The comparison of two means
16.2 Confidence interval for the difference between two normal means
16.3 Paired samples
16.4 The comparison of the variances of two normal distributions
16.5 Confidence interval for a variance ratio

17. Goodness of Fit
17.1 The chi-squared test
17.2 Small expected frequencies
17.3 Goodness of fit to a prescribed distribution type
17.4 Comparing distribution functions
17.5 The dispersion test
17.6 Contingency tables
17.7 The 2 × 2 table: the comparison of two proportions
17.8 *Multi-way contingency tables

18. Correlation
18.1 The product-moment correlation coefficient
18.2 Nonsense correlation: storks and gooseberry bushes
18.3 The ecological fallacy: immigration and illiteracy
18.4 Simpson's paradox: amputation or injection?
18.5 Rank correlation

19. Regression
19.1 The equation of a straight line
19.2 Why 'regression'?
19.3 The method of least squares
19.4 Transformations, extrapolation and outliers
19.5 Properties of the estimators
19.6 Analysis of variance (ANOVA)
19.7 Multiple regression

20. *The Bayesian Approach

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
