Book:Stephen Bernstein/Elements of Statistics II: Inferential Statistics
Stephen Bernstein and Ruth Bernstein: Elements of Statistics II: Inferential Statistics
Published $\text {1999}$, Schaum's Outlines
- ISBN 0-07-134637-6
Subject Matter
Contents
- CHAPTER 11: DISCRETE PROBABILITY DISTRIBUTIONS
- 11.1 Discrete Probability Distributions and Probability Mass Functions
- 11.2 Bernoulli Experiments and Trials
- 11.3 Binomial Random Variables, Experiments, and Probability Functions
- 11.4 The Binomial Coefficient
- 11.5 The Binomial Probability Function
- 11.6 Mean, Variance, and Standard Deviation of the Binomial Probability Distribution
- 11.7 The Binomial Expansion and the Binomial Theorem
- 11.8 Pascal's Triangle and the Binomial Coefficient
- 11.9 The Family of Binomial Distributions
- 11.10 The Cumulative Binomial Probability Table
- 11.11 Lot-Acceptance Sampling
- 11.12 Consumer's Risk and Producer's Risk
- 11.13 Multivariate Probability Distributions and Joint Probability Distributions
- 11.14 The Multinomial Experiment
- 11.15 The Multinomial Coefficient
- 11.16 The Multinomial Probability Function
- 11.17 The Family of Multinomial Probability Distributions
- 11.18 The Means of the Multinomial Probability Distribution
- 11.19 The Multinomial Expansion and the Multinomial Theorem
- 11.20 The Hypergeometric Experiment
- 11.21 The Hypergeometric Probability Function
- 11.22 The Family of Hypergeometric Probability Distributions
- 11.23 The Mean, Variance, and Standard Deviation of the Hypergeometric Probability Distribution
- 11.24 The Generalization of the Hypergeometric Probability Distribution
- 11.25 The Binomial and Multinomial Approximations to the Hypergeometric Distribution
- 11.26 Poisson Processes, Random Variables, and Experiments
- 11.27 The Poisson Probability Function
- 11.28 The Family of Poisson Probability Distributions
- 11.29 The Mean, Variance, and Standard Deviation of the Poisson Probability Distribution
- 11.30 The Cumulative Poisson Probability Table
- 11.31 The Poisson Distribution as an Approximation to the Binomial Distribution
- CHAPTER 12: THE NORMAL DISTRIBUTION AND OTHER CONTINUOUS PROBABILITY DISTRIBUTIONS
- 12.1 Continuous Probability Distributions
- 12.2 The Normal Probability Distributions and the Normal Probability Density Function
- 12.3 The Family of Normal Probability Distributions
- 12.4 The Normal Distribution: Relationship between the Mean $(\mu)$, Median $(\tilde \mu)$, and the Mode
- 12.5 Kurtosis
- 12.6 The Standard Normal Distribution
- 12.7 Relationship Between the Standard Normal Distribution and the Standard Normal Variable
- 12.8 Table of Areas in the Standard Normal Distribution
- 12.9 Finding Probabilities Within any Normal Distribution by Applying the Z Transformation
- 12.10 One-tailed Probabilities
- 12.11 Two-tailed Probabilities
- 12.12 The Normal Approximation to the Binomial Distribution
- 12.13 The Normal Approximation to the Poisson Distribution
- 12.14 The Discrete Uniform Probability Distribution
- 12.15 The Continuous Uniform Probability Distribution
- 12.16 The Exponential Probability Distribution
- 12.17 Relationship between the Exponential Distribution and the Poisson Distribution
- CHAPTER 13: SAMPLING DISTRIBUTIONS
- 13.1 Simple Random Sampling Revisited
- 13.2 Independent Random Variables
- 13.3 Mathematical and Nonmathematical Definitions of Simple Random Sampling
- 13.4 Assumptions of the Sampling Technique
- 13.5 The Random Variable $\overline X$
- 13.6 Theoretical and Empirical Sampling Distributions of the Mean
- 13.7 The Mean of the Sampling Distribution of the Mean
- 13.8 The Accuracy of an Estimator
- 13.9 The Variance of the Sampling Distribution of the Mean: Infinite Population or Sampling with Replacement
- 13.10 The Variance of the Sampling Distribution of the Mean: Finite Population Sampled without Replacement
- 13.11 The Standard Error of the Mean
- 13.12 The Precision of An Estimator
- 13.13 Determining Probabilities with a Discrete Sampling Distribution of the Mean
- 13.14 Determining Probabilities with a Normally Distributed Sampling Distribution of the Mean
- 13.15 The Central Limit Theorem: Sampling from a Finite Population with Replacement
- 13.16 The Central Limit Theorem: Sampling from an Infinite Population
- 13.17 The Central Limit Theorem: Sampling from a Finite Population without Replacement
- 13.18 How Large is "Sufficiently Large"?
- 13.19 The Sampling Distribution of the Sample Sum
- 13.20 Applying the Central Limit Theorem to the Sampling Distribution of the Sample Sum
- 13.21 Sampling from a Binomial Population
- 13.22 Sampling Distribution of the Number of Successes
- 13.23 Sampling Distribution of the Proportion
- 13.24 Applying the Central Limit Theorem to the Sampling Distribution of the Number of Successes
- 13.25 Applying the Central Limit Theorem to the Sampling Distribution of the Proportion
- 13.26 Determining Probabilities with a Normal Approximation to the Sampling Distribution of the Proportion
- CHAPTER 14: ONE-SAMPLE ESTIMATION OF THE POPULATION MEAN
- 14.1 Estimation
- 14.2 Criteria for Selecting the Optimal Estimator
- 14.3 The Estimated Standard Error of the Mean $S_{\overline x}$
- 14.4 Point Estimates
- 14.5 Reporting and Evaluating the Point Estimate
- 14.6 Relationship between Point Estimates and Interval Estimates
- 14.7 Deriving $P \left({\overline x_{1 - \alpha/2} \le \overline X \le \overline x_{\alpha/2}}\right) = P \left({-z_{\alpha/2} \le Z \le z_{\alpha/2}}\right) = 1 - \alpha$
- 14.8 Deriving $P \left({\overline X - z_{\alpha/2} \sigma_{\overline x} \le \mu \le \overline X + z_{\alpha/2} \sigma_{\overline x}}\right) = 1 - \alpha$
- 14.9 Confidence Interval for the Population Mean $\mu$: Known Standard Deviation $\sigma$, Normally Distributed Population
- 14.10 Presenting Confidence Limits
- 14.11 Precision of the Confidence Interval
- 14.12 Determining Sample Size when the Standard Deviation is Known
- 14.13 Confidence Interval for the Population Mean $\mu$: Known Standard Deviation $\sigma$, Large Sample $(n \ge 30)$ from any Population Distribution
- 14.14 Determining Confidence Intervals for the Population Mean $\mu$ when the Population Standard Deviation $\sigma$ is Unknown
- 14.15 The $t$ Distribution
- 14.16 Relationship between the $t$ Distribution and the Standard Normal Distribution
- 14.17 Degrees of Freedom
- 14.18 The Term "Student's $t$ Distribution"
- 14.19 Critical Values of the $t$ Distribution
- 14.20 Table A.6: Critical Values of the $t$ Distribution
- 14.21 Confidence Interval for the Population Mean $\mu$: Standard Deviation $\sigma$ not known, Small Sample $(n < 30)$ from a Normally Distributed Population
- 14.22 Determining Sample Size: Unknown Standard Deviation, Small Sample from a Normally Distributed Population
- 14.23 Confidence Interval for the Population Mean $\mu$: Standard Deviation $\sigma$ not known, large sample $(n \ge 30)$ from a Normally Distributed Population
- 14.24 Confidence Interval for the Population Mean $\mu$: Standard Deviation $\sigma$ not known, large sample $(n \ge 30)$ from a Population that is not Normally Distributed
- 14.25 Confidence Interval for the Population Mean $\mu$: Standard Deviation $\sigma$ not known, Small Sample $(n < 30)$ from a Population that is not Normally Distributed
- CHAPTER 15: ONE-SAMPLE ESTIMATION OF THE POPULATION VARIANCE, STANDARD DEVIATION, AND PROPORTION
- 15.1 Optimal Estimators of Variance, Standard Deviation, and Proportion
- 15.2 The Chi-square Statistic and the Chi-square Distribution
- 15.3 Critical Values of the Chi-square Distribution
- 15.4 Table A.7: Critical Values of the Chi-square Distribution
- 15.5 Deriving the Confidence Interval for the Variance $\sigma^2$ of a Normally Distributed Population
- 15.6 Presenting Confidence Limits
- 15.7 Precision of the Confidence Interval for the Variance
- 15.8 Determining Sample Size Necessary to Achieve a Desired Quality-of-Estimate for the Variance
- 15.9 Using Normal-Approximation Techniques to Determine Confidence Intervals for the Variance
- 15.10 Using the Sampling Distribution of the Sample Variance to Approximate a Confidence Interval for the Population Variance
- 15.11 Confidence Interval for the Standard Deviation $\sigma$ of a Normally Distributed Population
- 15.12 Using the Sampling Distribution of the Sample Standard Deviation to Approximate a Confidence Interval for the Population Standard Deviation
- 15.13 The Optimal Estimator for the Proportion $p$ of a Binomial Population
- 15.14 Deriving the Approximate Confidence Interval for the Proportion $p$ of a Binomial Population
- 15.15 Estimating the Parameter $p$
- 15.16 Deciding when $n$ is "Sufficiently Large", $p$ not known
- 15.17 Approximate Confidence Intervals for the Binomial Parameter $p$ When Sampling From a Finite Population without Replacement
- 15.18 The Exact Confidence Interval for the Binomial Parameter $p$
- 15.19 Precision of the Approximate Confidence-Interval Estimate of the Binomial Parameter $p$
- 15.20 Determining Sample Size for the Confidence Interval of the Binomial Parameter $p$
- 15.21 Approximate Confidence Interval for the Percentage of a Binomial Population
- 15.22 Approximate Confidence Interval for the Total Number in a Category of a Binomial Population
- 15.23 The Capture-Recapture Method for Estimating Population Size $N$
- CHAPTER 16: ONE-SAMPLE HYPOTHESIS TESTING
- 16.1 Statistical Hypothesis Testing
- 16.2 The Null Hypothesis and the Alternative Hypothesis
- 16.3 Testing the Null Hypothesis
- 16.4 Two-Sided Versus One-Sided Hypothesis Tests
- 16.5 Testing Hypotheses about the Population Mean $\mu$: Known Standard Deviation $\sigma$, Normally Distributed Population
- 16.6 The $P$ Value
- 16.7 Type I Error versus Type II Error
- 16.8 Critical Values and Critical Regions
- 16.9 The Level of Significance
- 16.10 Decision Rules for Statistical Hypothesis Tests
- 16.11 Selecting Statistical Hypotheses
- 16.12 The Probability of a Type II Error
- 16.13 Consumer's Risk and Producer's Risk
- 16.14 Why It is Not Possible to Prove the Null Hypothesis
- 16.15 Classical Inference Versus Bayesian Inference
- 16.16 Procedure for Testing the Null Hypothesis
- 16.17 Hypothesis Testing Using $\overline X$ as the Test Statistic
- 16.18 The Power of a Test, Operating Characteristic Curves, and Power Curves
- 16.19 Testing Hypotheses about the Population Mean $\mu$: Standard Deviation $\sigma$ Not Known, Small Sample $(n < 30)$ from a Normally Distributed Population
- 16.20 The $P$ Value for the $t$ Statistic
- 16.21 Decision Rules for Hypothesis Tests with the $t$ Statistic
- 16.22 $\beta$, $1 - \beta$, Power Curves, and $OC$ Curves
- 16.23 Testing Hypotheses about the Population Mean $\mu$: Large Sample $(n \ge 30)$ from any Population Distribution
- 16.24 Assumptions of One-Sample Parametric Hypothesis Testing
- 16.25 When the Assumptions are Violated
- 16.26 Testing Hypotheses about the Variance $\sigma^2$ of a Normally Distributed Population
- 16.27 Testing Hypotheses about the Standard Deviation $\sigma$ of a Normally Distributed Population
- 16.28 Testing Hypotheses about the Proportion $p$ of a Binomial Population: Large Samples
- 16.29 Testing Hypotheses about the Proportion $p$ of a Binomial Population: Small Samples
- CHAPTER 17: TWO-SAMPLE ESTIMATION AND HYPOTHESIS TESTING
- 17.1 Independent Samples Versus Paired Samples
- 17.2 The Optimal Estimator of the Difference Between Two Population Means $(\mu_1 - \mu_2)$
- 17.3 The Theoretical Sampling Distribution of the Difference Between Two Means
- 17.4 Confidence Interval for the Difference Between Means $(\mu_1 - \mu_2)$: Standard Deviations ($\sigma_1$ and $\sigma_2$) Known, Independent Samples from Normally Distributed Populations
- 17.5 Testing Hypotheses about the Difference Between Means $(\mu_1 - \mu_2)$: Standard Deviations ($\sigma_1$ and $\sigma_2$) known, Independent Samples from Normally Distributed Populations
- 17.6 The Estimated Standard Error of the Difference Between Two Means
- 17.7 Confidence Interval for the Difference Between Means $(\mu_1 - \mu_2)$: Standard Deviations not known but Assumed Equal ($\sigma_1 = \sigma_2$), Small ($n_1 < 30$ and $n_2 < 30$) Independent Samples from Normally Distributed Populations
- 17.8 Testing Hypotheses about the Difference Between Means $(\mu_1 - \mu_2)$: Standard Deviations not Known but Assumed Equal ($\sigma_1 = \sigma_2$), Small ($n_1 < 30$ and $n_2 < 30$) Independent Samples from Normally Distributed Populations
- 17.9 Confidence Interval for the Difference Between Means $(\mu_1 - \mu_2)$: Standard Deviations ($\sigma_1$ and $\sigma_2$) not Known, Large ($n_1 \ge 30$ and $n_2 \ge 30$) Independent Samples from any Population Distributions
- 17.10 Testing Hypotheses about the Difference Between Means $(\mu_1 - \mu_2)$: Standard Deviations ($\sigma_1$ and $\sigma_2$) not Known, Large ($n_1 \ge 30$ and $n_2 \ge 30$) Independent Samples from any Population Distributions
- 17.11 Confidence Interval for the Difference Between Means $(\mu_1 - \mu_2)$: Paired Samples
- 17.12 Testing Hypotheses about the Difference Between Means $(\mu_1 - \mu_2)$: Paired Samples
- 17.13 Assumptions of Two-sample Parametric Estimation and Hypothesis Testing about Means
- 17.14 When the Assumptions are Violated
- 17.15 Comparing Independent-Sampling and Paired-Sampling Techniques on Precision and Power
- 17.16 The $F$ Statistic
- 17.17 The $F$ Distribution
- 17.18 Critical Values of the $F$ Distribution
- 17.19 Table A.8: Critical Values of the $F$ Distribution
- 17.20 Confidence Interval for the Ratio of Variances $\left({\sigma_1^2 / \sigma_2^2}\right)$: Parameters ($\sigma_1^2, \sigma_1, \mu_1$ and $\sigma_2^2, \sigma_2, \mu_2$) Not Known, Independent Samples From Normally Distributed Populations
- 17.21 Testing Hypotheses about the Ratio of Variances $\left({\sigma_1^2 / \sigma_2^2}\right)$: Parameters ($\sigma_1^2, \sigma_1, \mu_1$ and $\sigma_2^2, \sigma_2, \mu_2$) not known, Independent Samples from Normally Distributed Populations
- 17.22 When to Test for Homogeneity of Variance
- 17.23 The Optimal Estimator of the Difference Between Proportions $(p_1 - p_2)$: Large Independent Samples
- 17.24 The Theoretical Sampling Distribution of the Difference Between Two Proportions
- 17.25 Approximate Confidence Interval for the Difference Between Proportions from Two Binomial Populations $(p_1 - p_2)$: Large Independent Samples
- 17.26 Testing Hypotheses about the Difference Between Proportions from Two Binomial Populations $(p_1 - p_2)$: Large Independent Samples
- CHAPTER 18: MULTISAMPLE ESTIMATION AND HYPOTHESIS TESTING
- 18.1 Multisample Inferences
- 18.2 The Analysis of Variance
- 18.3 ANOVA: One-Way, Two-Way, or Multiway
- 18.4 One-Way ANOVA: Fixed-Effects or Random-Effects
- 18.5 One-way, Fixed-Effects ANOVA: The Assumptions
- 18.6 Equal-Samples, One-Way, Fixed-Effects ANOVA: $H_0$ and $H_1$
- 18.7 Equal-Samples, One-Way, Fixed-Effects ANOVA: Organizing the Data
- 18.8 Equal-Samples, One-Way, Fixed-Effects ANOVA: the Basic Rationale
- 18.9 $SST = SSA + SSW$
- 18.10 Computational Formulas for $SST$ and $SSA$
- 18.11 Degrees of Freedom and Mean Squares
- 18.12 The $F$ Test
- 18.13 The ANOVA Table
- 18.14 Multiple Comparison Tests
- 18.15 Duncan's Multiple-Range Test
- 18.16 Confidence-Interval Calculations Following Multiple Comparisons
- 18.17 Testing for Homogeneity of Variance
- 18.18 One-Way, Fixed-Effects ANOVA: Equal or Unequal Sample Sizes
- 18.19 General-Procedure, One-Way, Fixed-Effects ANOVA: Organizing the Data
- 18.20 General-Procedure, One-Way, Fixed-Effects ANOVA: Sum of Squares
- 18.21 General-Procedure, One-Way, Fixed-Effects ANOVA: Degrees of Freedom and Mean Squares
- 18.22 General-Procedure, One-Way, Fixed-Effects ANOVA: The $F$ Test
- 18.23 General-Procedure, One-Way, Fixed-Effects ANOVA: Multiple Comparisons
- 18.24 General-Procedure, One-Way, Fixed-Effects ANOVA: Calculating Confidence Intervals and Testing for Homogeneity of Variance
- 18.25 Violations of ANOVA Assumptions
- CHAPTER 19: REGRESSION AND CORRELATION
- 19.1 Analyzing the Relationship between Two Variables
- 19.2 The Simple Linear Regression Model
- 19.3 The Least-Squares Regression Line
- 19.4 The Estimator of the Variance $\sigma^2_{Y \cdot X}$
- 19.5 Mean and Variance of the $y$ Intercept $\hat a$ and the Slope $\hat b$
- 19.6 Confidence Intervals for the $y$ Intercept $a$ and the Slope $b$
- 19.7 Confidence Interval for the Variance $\sigma^2_{Y \cdot X}$
- 19.8 Prediction Intervals for Expected Values of $Y$
- 19.9 Testing Hypotheses about the Slope $b$
- 19.10 Comparing Simple Linear Regression Equations from Two or More Samples
- 19.11 Multiple Linear Regression
- 19.12 Simple Linear Correlation
- 19.13 Derivation of the Correlation Coefficient $r$
- 19.14 Confidence Intervals for the Population Correlation Coefficient $\rho$
- 19.15 Using the $r$ Distribution to Test Hypotheses about the Population Correlation Coefficient $\rho$
- 19.16 Using the $F$ Distribution to Test Hypotheses about $\rho$
- 19.17 Using the $Z$ Distribution to Test the Hypothesis $\rho = c$
- 19.18 Interpreting the Sample Correlation Coefficient $r$
- 19.19 Multiple Correlation and Partial Correlation
- CHAPTER 20: NONPARAMETRIC TECHNIQUES
- 20.1 Nonparametric vs. Parametric Techniques
- 20.2 Chi-Square Tests
- 20.3 Chi-Square Test for Goodness-of-fit
- 20.4 Chi-Square Test for Independence: Contingency Table Analysis
- 20.5 Chi-Square Test for Homogeneity Among $k$ Binomial Proportions
- 20.6 Rank Order Tests
- 20.7 One-Sample Tests: The Wilcoxon Signed-Rank Test
- 20.8 Two-Sample Tests: The Wilcoxon Signed-Rank Test for Dependent Samples
- 20.9 Two-Sample Tests: The Mann-Whitney $U$ Test for Independent Samples
- 20.10 Multisample Tests: The Kruskal-Wallis $H$ Test for $k$ Independent Samples
- 20.11 The Spearman Test of Rank Correlation
- Appendix
- Table A.3 Cumulative Binomial Probabilities
- Table A.4 Cumulative Poisson Probabilities
- Table A.5 Areas of the Standard Normal Distribution
- Table A.6 Critical Values of the $t$ Distribution
- Table A.7 Critical Values of the Chi-Square Distribution
- Table A.8 Critical Values of the $F$ Distribution
- Table A.9 Least Significant Studentized Ranges $r_p$
- Table A.10 Transformation of $r$ to $z_r$
- Table A.11 Critical Values of the Pearson Product-Moment Correlation Coefficient $r$
- Table A.12 Critical Values of the Wilcoxon $W$
- Table A.13 Critical Values of the Mann-Whitney $U$
- Table A.14 Critical Values of the Kruskal-Wallis $H$
- Table A.15 Critical Values of the Spearman $r_s$
- Index