This book is in two volumes, and is intended as a text for introductory courses in probability and statistics at the second or third year university level. It emphasizes applications and logical principles rather than mathematical theory. A good background in freshman calculus is sufficient for most of the material presented. Several starred sections have been included as supplementary material. Nearly 900 problems and exercises of varying difficulty are given, and Appendix A contains answers to about one-third of them.

The first volume (Chapters 1-8) deals with probability models and with mathematical methods for describing and manipulating them. It is similar in content and organization to the 1979 edition. Some sections have been rewritten and expanded; for example, the discussions of independent random variables and conditional probability. Many new exercises have been added.

In the second volume (Chapters 9-16), probability models are used as the basis for the analysis and interpretation of data. This material has been revised extensively. Chapters 9 and 10 describe the use of the likelihood function in estimation problems, as in the 1979 edition. Chapter 11 then discusses frequency properties of estimation procedures, and introduces coverage probability and confidence intervals. Chapter 12 describes tests of significance, with applications primarily to frequency data. The likelihood ratio statistic is used to unify the material on testing, and to connect it with earlier material on estimation.
Contents of Volume 2

9 Likelihood Methods
  9.1 The Method of Maximum Likelihood
  9.2 Combining Independent Experiments
  9.3 Relative Likelihood
  9.4 Likelihood for Continuous Models
  9.5 Censoring in Lifetime Experiments
  9.6 Invariance
  9.7 Normal Approximations
  9.8 Newton's Method
  Review Problems

10 Two-Parameter Likelihoods
  10.1 Maximum Likelihood Estimation
  10.2 Relative Likelihood and Contour Maps
  10.3 Maximum Relative Likelihood
  10.4 Normal Approximations
  10.5 A Dose-Response Example
  10.6 An Example from Learning Theory
  10.7* Some Derivations
  10.8* Multi-Parameter Likelihoods

11 Frequency Properties
  11.1 Sampling Distributions
  11.2 Coverage Probability
  11.3 Chi-Square Approximations
  11.4 Confidence Intervals
  11.5 Results for Two-Parameter Models
  11.6* Expected Information and Planning Experiments
  11.7* Bias

12 Tests of Significance
  12.1 Introduction
  12.2 Likelihood Ratio Tests for Simple Hypotheses
  12.3 Likelihood Ratio Tests for Composite Hypotheses
  12.4 Tests for Binomial Probabilities
  12.5 Tests for Multinomial Probabilities
  12.6 Tests for Independence in Contingency Tables
  12.7 Cause and Effect
  12.8 Testing for Marginal Homogeneity
  12.9 Significance Regions
  12.10* Power

13 Analysis of Normal Measurements
  13.1 Introduction
  13.2 Statistical Methods
  13.3 The One-Sample Model
  13.4 The Two-Sample Model
  13.5 The Straight Line Model
  13.6 The Straight Line Model (Continued)
  13.7 Analysis of Paired Measurements
  Review Problems

14 Normal Linear Models
  14.1 Matrix Notation
  14.2 Parameter Estimates
  14.3 Testing Hypotheses in Linear Models
  14.4 More on Tests and Confidence Intervals
  14.5 Checking the Model
  14.6* Derivations

15 Sufficient Statistics and Conditional Tests
  15.1 The Sufficiency Principle
  15.2 Properties of Sufficient Statistics
  15.3 Exact Significance Levels and Coverage Probabilities
  15.4 Choosing the Reference Set
  15.5 Conditional Tests for Composite Hypotheses
  15.6 Some Examples of Conditional Tests

16 Topics in Statistical Inference
  16.1* The Fiducial Argument
  16.2* Bayesian Methods
  16.3* Prediction
  16.4* Inferences from Predictive Distributions
  16.5* Testing a True Hypothesis

Appendix A: Answers to Selected Problems
Appendix B: Tables