A normal distribution plays a prominent role in tests of hypothesis that involve the mean of a population. In particular, if a random sample of observations is normally distributed, statistical inferences about the sample mean can be made by constructing a Z-test statistic that follows a standard normal distribution. However, the use of this statistic requires that the population variance be known. In practice, the variance is rarely known, and must instead be estimated from the sampled data:

s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}

where n is the total number of observations, \bar{x} is the mean of the sampled observations, and x_i is the value of the ith observation (i = 1, 2, …, n). When the population variance is estimated from sampled data, the use of the Z-test statistic to perform hypothesis testing can lead to biased results. A solution to this problem was put forth in 1908 by William Sealy Gosset, a statistician employed by an Irish brewery, who published under the pseudonym "Student." Today, the "Student's t-distribution" is routinely used to perform tests of hypothesis.
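As a concrete sketch of this calculation, the sample variance with its n − 1 divisor can be computed directly; the data values below are hypothetical, chosen only to illustrate the formula.

```python
# Unbiased sample variance: s^2 = sum((x_i - xbar)^2) / (n - 1).
# The data below are hypothetical, for illustration only.
def sample_variance(xs):
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

data = [4.0, 6.0, 8.0, 10.0]     # mean = 7.0; squared deviations sum to 20
print(sample_variance(data))     # 20 / 3
```

Dividing by n − 1 rather than n is what makes this an unbiased estimate of the population variance.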
The Student's t-distribution is not a single distribution, but rather a family of symmetric distributions whose shape is determined by the number of sampled observations or, equivalently, the number of degrees of freedom. Like other probability distributions, the total area under the curve of a t-distribution is equal to one. The p-values for tests of hypothesis based on this distribution can typically be extracted from the tables published as an appendix in most statistical texts. As the number of degrees of freedom increases, the shape of the t-distribution converges to that of the standard normal distribution.
The t-distribution can be used to compare the mean of a sampled population to some fixed, known value. This statistical test of hypothesis is referred to as a "one-sample t-test." For example, a researcher might be interested in determining whether the average family income among Chicago residents was higher or lower than the average family income in the entire United States. Suppose that we knew that the average family income in the United States was $35,000 per year. The mean income from a random sample of Chicago residents could be calculated, and a one-sample t-test could be used to determine whether their mean income level was significantly different from the national average. This one-sample t-test statistic is calculated as follows:

t = \frac{\bar{x} - \mu}{s / \sqrt{n}}

where \bar{x} represents the mean of the sampled data, µ represents the hypothesized value of this mean, s represents the sample standard deviation, and n represents the total number of sampled measures. In the above example, µ would equal $35,000, while \bar{x} would be the average income calculated from data supplied by n Chicago residents. This t-statistic has n − 1 degrees of freedom.
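A minimal sketch of the one-sample test, using the $35,000 national mean from the example above; the six income values are hypothetical stand-ins for sampled Chicago residents.

```python
import math
import statistics

# One-sample t-test statistic: t = (xbar - mu) / (s / sqrt(n)).
# mu = 35000 is the national mean from the text; the incomes are hypothetical.
def one_sample_t(xs, mu):
    n = len(xs)
    xbar = statistics.mean(xs)
    s = statistics.stdev(xs)          # sample standard deviation (n - 1 divisor)
    return (xbar - mu) / (s / math.sqrt(n))

incomes = [38000, 42000, 31000, 45000, 36000, 40000]
t = one_sample_t(incomes, 35000)      # compared against n - 1 = 5 degrees of freedom
print(round(t, 2))
```

The resulting t would then be compared against a t-distribution with five degrees of freedom to obtain a p-value.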
In practice, the two-sample t-test is a more commonly used statistic. It evaluates whether there is a significant difference between the means of two independently sampled populations. In addition to the assumption of independence, it is assumed that within each population the variable of interest is normally distributed, with equal variances. The test statistic is calculated as follows:

t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}}

where n_1 and n_2 are the numbers of observations in the two groups; \bar{x}_1 and \bar{x}_2 are the means of the two groups; and s_p^2 is the pooled estimate of the sample variance, calculated using the formula:

s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}

where s_1^2 and s_2^2 represent the sample variances in the two groups. The total number of degrees of freedom associated with this t-test is n_1 + n_2 − 2.
To illustrate the application of this test statistic, consider an investigator who would like to determine whether infant birthweight differs significantly between mothers who smoked during pregnancy and those who did not. Suppose that the mean birthweight among ten infants whose mothers smoked was 5 lbs., while the mean birthweight among the same number of infants whose mothers did not smoke was 8 lbs. If the pooled sample variance based on the weight measurements taken on these twenty infants was 3, then, using the above formula, the calculated t-test statistic would be approximately 3.9 with 18 degrees of freedom. The two-sided p-value associated with this test is approximately 0.001, indicating that mean birthweight differed significantly between the two groups.
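The birthweight example can be reproduced directly from its summary statistics, a small sketch of the pooled two-sample formula above.

```python
import math

# Two-sample t-test from summary statistics, using the pooled-variance form:
# t = (x1bar - x2bar) / sqrt(sp2 * (1/n1 + 1/n2)).
def two_sample_t(x1bar, x2bar, sp2, n1, n2):
    return (x1bar - x2bar) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Birthweight example from the text: group means of 8 and 5 lbs.,
# pooled sample variance 3, ten infants per group.
t = two_sample_t(8, 5, 3, 10, 10)
df = 10 + 10 - 2
print(round(t, 2), df)    # t is approximately 3.87 with 18 degrees of freedom
```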
When the variances of the two groups being compared are not equal, the "modified t-test" should be used to compare the means. Instead of using a common pooled estimate of variance, the variance of each group is used in the calculation of the t-test statistic:

t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}

where n_1 and n_2 are the numbers of observations in the two groups; \bar{x}_1 and \bar{x}_2 are the means of the two groups; and s_1^2 and s_2^2 represent the variances of the two groups. Because the exact distribution of the modified t-test statistic is difficult to derive, it is necessary to approximate the number of degrees of freedom using the following formula:

d = \frac{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{\frac{(s_1^2 / n_1)^2}{n_1 - 1} + \frac{(s_2^2 / n_2)^2}{n_2 - 1}}
This value of d is rounded down to the nearest integer. Using the calculated modified t-test statistic and the estimated number of degrees of freedom, the p-value can then be obtained from the appropriate t-distribution to determine whether the two means are significantly different from each other.
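A sketch of the modified t-test and its degrees-of-freedom approximation; the summary statistics below are hypothetical, chosen to show the case of unequal group variances.

```python
import math

# Modified (unequal-variance) t-test with the approximate degrees of freedom d
# described in the text. All summary statistics here are hypothetical.
def modified_t(x1bar, s1sq, n1, x2bar, s2sq, n2):
    v1, v2 = s1sq / n1, s2sq / n2
    t = (x1bar - x2bar) / math.sqrt(v1 + v2)
    d = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, math.floor(d)           # d is rounded down to the nearest integer

# Group 1: mean 8, variance 5, n = 10; group 2: mean 5, variance 1, n = 10.
t, df = modified_t(8.0, 5.0, 10, 5.0, 1.0, 10)
print(round(t, 2), df)
```

Note that the approximated degrees of freedom (here, 12) are fewer than the 18 the pooled test would use, reflecting the penalty for estimating two separate variances.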
Paired data are frequently collected in studies of public health. Here, each observation in the first sample is matched to a unique data point in the second sample. In the technique of self-pairing, measurements are taken on a single subject, or entity, at two distinct points in time. One example of self-pairing is the before-and-after experiment, in which each individual is examined both before and after a certain treatment has been applied. Because the data are no longer independent, the two-sample t-test cannot be used to compare the before and after means. Instead, the paired t-test can be used to test the hypothesis that the mean difference of the pairs is equal to zero. This test statistic is constructed by taking the mean of all observed pair differences and dividing it by the standard error of that mean difference. The number of degrees of freedom associated with this test statistic is equal to the number of pairs less one.
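A minimal sketch of the paired test on a hypothetical before-and-after experiment; the measurements below are invented for illustration.

```python
import math
import statistics

# Paired t-test: the mean of the within-pair differences divided by the
# standard error of that mean, with (number of pairs - 1) degrees of freedom.
# The before/after measurements below are hypothetical.
def paired_t(before, after):
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    dbar = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return dbar / se, n - 1

before = [120, 135, 128, 140, 132]
after = [115, 130, 126, 133, 131]
t, df = paired_t(before, after)
print(round(t, 2), df)
```

Because each subject serves as its own control, the pairing removes between-subject variability from the comparison.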
Finally, the t-distribution plays a role in hypothesis tests based on the results of multivariate regression analysis. Multivariate regression models are used to describe the association between an outcome variable and a series of independent variables. Computer programs have been developed to estimate the values of the model coefficients and their standard errors. A test of significance for each independent variable can be performed by taking the ratio of its parameter estimate to its associated standard error; this ratio follows a t-distribution. The number of degrees of freedom for these test statistics is determined by the number of observations and the number of independent variables in the fitted model.
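The coefficient-to-standard-error ratio can be sketched for the simplest case, a regression with a single independent variable, where the test has n − 2 degrees of freedom; the (x, y) data below are hypothetical.

```python
import math

# t-ratio for a regression slope: b / SE(b), with n - 2 degrees of freedom
# in simple linear regression (one independent variable). Data are hypothetical.
def slope_t(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx                                  # least-squares slope estimate
    resid = [y - (ybar + b * (x - xbar)) for x, y in zip(xs, ys)]
    mse = sum(r ** 2 for r in resid) / (n - 2)     # residual mean square
    se = math.sqrt(mse / sxx)                      # standard error of the slope
    return b / se, n - 2

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]
t, df = slope_t(xs, ys)
print(round(t, 1), df)
```

With more independent variables, each coefficient's t-ratio is formed the same way, and the degrees of freedom shrink by one for every additional fitted parameter.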
PAUL J. VILLENEUVE