Original Japanese edition published by Kyoritsu Shuppan Co., Ltd., Tokyo, 2012
2014, XII, 300 p. 88 illus.
Most introductory books on practical statistics follow a standard approach: readers first learn the minimum mathematical basics of statistics and rudimentary concepts of statistical methodology. They are then given examples of analyses of data obtained from natural and social phenomena so that they can grasp practical definitions of statistical methods. Finally, they acquaint themselves with statistical software on a PC and analyze similar data to expand and deepen their understanding of those methods.
This book, however, takes a slightly different approach, using simulation data instead of actual data to illustrate what statistical methods do. The R programs listed in the book help readers see clearly how these methods bring the intrinsic features of data to the surface. R is free software that enables users to handle vectors, matrices, data frames, and so on.
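As a small illustration of the kind of object handling R provides (a generic sketch; the variable names here are not taken from the book):

```r
# Vectors: arithmetic is element-wise
x <- c(1, 2, 3)
y <- x * 2                             # c(2, 4, 6)

# Matrices: %*% is matrix multiplication
A <- matrix(c(1, 2, 3, 4), nrow = 2)   # filled column by column
I <- diag(2)                           # 2x2 identity matrix
B <- A %*% I                           # multiplying by I returns A

# Data frames: named columns of equal length
d <- data.frame(id = 1:3, value = y)
mean(d$value)                          # 4
```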
For example, when statistical theory says that an event happens with 5% probability, readers can confirm with R programs, by handling data generated from pseudo-random numbers, that the event actually occurs with roughly that probability. Simulation gives readers populations with known backgrounds, and the nature of a population can be adjusted easily. This feature of simulation data helps provide a clear picture of statistical methods painlessly.
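A minimal sketch of this idea (the seed and sample size below are illustrative choices, not values from the book): draw many standard-normal samples and check how often a value exceeds the upper 5% point of the distribution.

```r
set.seed(1)                 # make the pseudo-random numbers reproducible
n <- 100000
z <- rnorm(n)               # n samples from the standard normal distribution
crit <- qnorm(0.95)         # upper 5% point, approximately 1.645
p_hat <- mean(z > crit)     # observed proportion of exceedances
p_hat                       # roughly 0.05, as theory predicts
```

Because the population is generated by the program, its background is fully known, and the reader can vary the distribution or sample size and watch the observed frequency track the theoretical probability.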
Most readers of introductory statistics books for practical purposes dislike complex mathematical formulae, but they do not mind using a PC to produce numbers and graphs from a wide variety of data. If they know the characteristics of those numbers beforehand, they can treat them with ease; struggling with actual data should come later. Conventional books on this topic intimidate readers by presenting unidentified data to them indiscriminately. This book provides a new path to statistical concepts and practical skills in a readily accessible manner.
Content Level »Graduate
Keywords »Akaike's Information Criterion (AIC) - basic concepts of linear algebra - basic concepts of statistics - linear mixed-effects model - simulations using R programs
Chapter 1 Linear algebra. Starting up and executing R. Vectors. Matrices. Addition of two matrices. Multiplying two matrices. Identity and inverse matrices. Simultaneous equations. Diagonalization of a symmetric matrix. Quadratic forms.
Chapter 2 Distributions and tests. Sampling and random variables. Probability distribution. Normal distribution and the central limit theorem. Interval estimation by t distribution. t-test. Interval estimation of population variance and the χ2 distribution. F distribution and F-test. Wilcoxon signed-rank sum test.
Chapter 3 Simple regression. Derivation of regression coefficients. Exchange between predictor variable and target variable. Regression to the mean. Confidence interval of regression coefficients in simple regression. t-test in simple regression. F-test on simple regression. Selection between constant and nonconstant regression equations. Prediction error of simple regression. Weighted regression. Least squares method and prediction error.
Chapter 4 Multiple regression. Derivation of regression coefficients. Test on multiple regression. Prediction error on multiple regression. Notes on model selection using prediction error. Polynomial regression. Variance of regression coefficient and multicollinearity. Detection of multicollinearity using Variance Inflation Factors. Hessian matrix of log-likelihood.
Chapter 5 Akaike's Information Criterion (AIC) and the third variance. Cp and FPE. AIC of a multiple regression equation with independent and identical normal distribution. Derivation of AIC for multiple regression. AIC with unbiased estimator for error variance. Error variance by maximizing expectation of log-likelihood in light of the data in the future and the "third variance." Relationship between AIC (or GCV) and F-test. AIC on Poisson regression.
Chapter 6 Linear mixed model. Random-effects model. Random intercept model. Random intercept and slope model. Generalized linear mixed model. Generalized additive mixed model.