This chapter explains the structure and steps of hypothesis testing, the concept of significance, the relationship between confidence intervals and hypothesis testing, and Type I and Type II errors.
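For instance, the duality between a two-sided test and a confidence interval can be sketched in Python with scipy.stats (the data below are made up for illustration); the test rejects H0: mu = 50 at the 0.05 level exactly when 50 falls outside the 95% confidence interval:

```python
import numpy as np
from scipy import stats

# Hypothetical sample; test H0: mu = 50 against a two-sided alternative at alpha = 0.05
rng = np.random.default_rng(0)
sample = rng.normal(loc=52, scale=10, size=40)

t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# 95% confidence interval for the mean, based on the t distribution
mean = sample.mean()
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)

# The test rejects H0 at alpha = 0.05 exactly when 50 lies outside the 95% CI
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```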
This text explains the differences among t-tests, z-tests, tests of proportions, and tests of correlation.
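As a rough illustration of how these tests differ, the following Python sketch runs each one on made-up data with scipy and statsmodels (the sample sizes and values are arbitrary):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.weightstats import ztest

rng = np.random.default_rng(1)
group_a = rng.normal(100, 15, 30)   # made-up scores for group A
group_b = rng.normal(108, 15, 30)   # made-up scores for group B

# Two-sample t-test: compares two means when the population SDs are unknown
print("t-test:      ", stats.ttest_ind(group_a, group_b))

# z-test: appropriate with large samples or a known population SD
print("z-test:      ", ztest(group_a, group_b))

# Test of a proportion: e.g., 27 successes in 60 trials against pi = 0.5
print("proportion:  ", proportions_ztest(count=27, nobs=60, value=0.5))

# Test of a correlation: Pearson r and its p-value
print("correlation: ", stats.pearsonr(group_a, group_b))
```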
Analysis of variance (ANOVA) is used to test hypotheses about differences between two or more means. The t-test based on the standard error of the difference between two means can only be used to test differences between two means. When there are more than two means, it is possible to compare each pair of means with separate t-tests. However, conducting multiple t-tests can lead to severe inflation of the Type I error rate. Analysis of variance can be used to test differences among several means for significance without increasing the Type I error rate. This chapter covers designs with between-subjects variables.
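A small simulation can sketch why. In the hypothetical setup below, all five groups come from the same population, so any significant result is a Type I error; running every pairwise t-test yields a familywise error rate well above 0.05, while a single one-way ANOVA keeps it near 0.05 (Python with scipy; the group count, sample size, and number of simulations are arbitrary choices):

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)
n_sims, k, n, alpha = 2000, 5, 20, 0.05
false_pos_ttests = false_pos_anova = 0

for _ in range(n_sims):
    # All k groups drawn from the SAME population, so the null hypothesis is true
    groups = [rng.normal(0, 1, n) for _ in range(k)]

    # Any significant pairwise t-test is a false positive
    if any(stats.ttest_ind(a, b).pvalue < alpha for a, b in combinations(groups, 2)):
        false_pos_ttests += 1

    # A single one-way ANOVA across all k groups
    if stats.f_oneway(*groups).pvalue < alpha:
        false_pos_anova += 1

print("Familywise error, pairwise t-tests:", false_pos_ttests / n_sims)  # well above 0.05
print("Type I error, one-way ANOVA:", false_pos_anova / n_sims)          # close to 0.05
```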
When an experimenter is interested in the effects of two or more independent variables, it is usually more efficient to manipulate these variables in one experiment than to run a separate experiment for each variable. Moreover, only in experiments with more than one independent variable is it possible to test for interactions among variables. Experimental designs in which every level of every variable is paired with every level of every other variable are called factorial designs.
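A minimal sketch of a 2 x 2 factorial analysis, assuming made-up data and using statsmodels, shows how a single model yields tests of both main effects and the interaction:

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)

# Hypothetical 2 x 2 factorial data: every level of A is paired with every level of B
a = np.repeat(["a1", "a2"], 40)
b = np.tile(np.repeat(["b1", "b2"], 20), 2)
cell_mean = {("a1", "b1"): 0, ("a1", "b2"): 2, ("a2", "b1"): 1, ("a2", "b2"): 6}
y = [cell_mean[(ai, bi)] + rng.normal(0, 1) for ai, bi in zip(a, b)]

df = pd.DataFrame({"A": a, "B": b, "y": y})

# One model provides F tests for both main effects and the A x B interaction
model = ols("y ~ C(A) * C(B)", data=df).fit()
print(anova_lm(model, typ=2))
```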
Within-subjects designs are designs in which one or more of the independent variables are within-subjects variables. Within-subjects designs are often called repeated-measures designs since within-subjects variables always involve taking repeated measurements from each subject. Within-subjects designs are extremely common in psychological and biomedical research.
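The simplest within-subjects comparison is a paired t-test on two measurements taken from the same subjects; the sketch below uses hypothetical data and scipy.stats:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical repeated measures: each subject is tested under both conditions
baseline = rng.normal(100, 15, 25)
treatment = baseline + rng.normal(3, 5, 25)  # second measurement on the same subjects

# The paired t-test analyzes within-subject difference scores,
# removing variability due to overall differences between subjects
t_stat, p_value = stats.ttest_rel(baseline, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```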
When two variables are related, it is possible to predict a person's score on one variable from their score on the second variable with better than chance accuracy. This section describes how these predictions are made and what can be learned about the relationship between the variables by developing a prediction equation.
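For example, a least-squares prediction equation can be obtained with scipy.stats.linregress; the variable names and data below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical paired scores: predict y (e.g., exam score) from x (e.g., hours studied)
x = rng.uniform(0, 10, 50)
y = 60 + 3 * x + rng.normal(0, 5, 50)

# Least-squares prediction equation: y' = intercept + slope * x
result = stats.linregress(x, y)
print(f"y' = {result.intercept:.2f} + {result.slope:.2f} * x  (r = {result.rvalue:.2f})")

# Predicted score for a new value of x
x_new = 7.5
print("predicted y:", result.intercept + result.slope * x_new)
```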
This chapter discusses a collection of tests called distribution-free tests, or nonparametric tests, that do not make any assumptions about the distribution from which the numbers were sampled. The main advantage of distribution-free tests is that they provide more power than traditional tests when the samples are from highly skewed distributions.
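As a rough sketch, the comparison below draws two samples from a highly skewed (lognormal) distribution and applies both a rank-based Mann-Whitney U test and an ordinary t-test (Python with scipy; the distributions and sample sizes are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical samples from a highly skewed (lognormal) distribution
control = rng.lognormal(mean=0.0, sigma=1.0, size=30)
treated = rng.lognormal(mean=0.5, sigma=1.0, size=30)

# Rank-based Mann-Whitney U test makes no assumption about the population's shape
u_stat, p_mw = stats.mannwhitneyu(control, treated, alternative="two-sided")

# Traditional t-test, shown for comparison
t_stat, p_t = stats.ttest_ind(control, treated)

print(f"Mann-Whitney U: p = {p_mw:.4f}")
print(f"t-test:         p = {p_t:.4f}")
```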
Measures of the size of an effect based on the degree of overlap between groups usually involve calculating the proportion of the variance that can be explained by differences between groups. This resource outlines different approaches to measuring this proportion.
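One common measure of this kind is eta squared, the ratio of the between-groups sum of squares to the total sum of squares; a minimal Python sketch with made-up scores:

```python
import numpy as np

# Hypothetical scores for three groups
groups = [np.array([4, 5, 6, 5]), np.array([7, 8, 6, 7]), np.array([9, 10, 11, 10])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Sum of squares between groups and total sum of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()

# Eta squared: the proportion of total variance explained by group membership
eta_squared = ss_between / ss_total
print(f"eta^2 = {eta_squared:.3f}")
```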
Online Statistics: An Interactive Multimedia Course of Study is a resource for learning and teaching introductory statistics. It contains material presented in textbook format and as video presentations. This resource features interactive demonstrations and simulations, case studies, and an analysis lab.
A collection of Java applets and simulations covering a range of topics (descriptive statistics, confidence intervals, regression, effect size, ANOVA, etc.).