The validity of scientific or medical studies, and their capacity to generalize, is often jeopardized by bias: systematic, nonrandom effects that, even in a large study, produce an incorrect result by weakening, distorting, or spuriously creating a relation between a risk factor or intervention and the observed outcome. Bias may arise, for example, when the reference population differs from the intended target group. Researchers must recognize this potential and reduce its effects through study design, analysis, and interpretation. Controlled laboratory experiments and randomized clinical trials are less prone to bias than observational studies such as cohort or case-control studies, but this protection extends only to a limited set of conclusions, and bias must be addressed in all studies.
There are many types of bias, intentional or unintentional, and events or features that bias one study may have no biasing effect on another. Biases can result from selection effects (e.g., the sampling plan omits a subgroup, overrepresents a subgroup, or has more complete follow-up for a subgroup, as in the healthy worker effect); differential measurement (e.g., cancer cases provide a more accurate family history or exposure history than do controls); measurement error (e.g., the recorded and actual exposures to cigarette smoke differ); and a host of other factors.
Bias is a loaded term, in that not all bias is bad. For example, in small studies a statistically biased estimate (one that on average does not equal the population value) can have substantially lower variance than the unbiased estimate and thus be preferred. Regression techniques rely on this trade-off between variance and bias when deciding on the value of entering additional explanatory variables.
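The bias-variance trade-off described above can be illustrated with a small simulation. The sketch below, which is not from the entry itself, compares the mean squared error of the ordinary sample mean with that of a deliberately biased shrinkage estimator; the specific parameter values and the shrinkage factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 10.0, 5   # small sample, noisy data (illustrative values)
trials = 20_000

# Unbiased estimator: the sample mean.
# Biased estimator: the sample mean shrunk toward zero, trading bias for variance.
shrink = 0.8

means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
unbiased = means
shrunk = shrink * means

mse_unbiased = np.mean((unbiased - mu) ** 2)  # ~ sigma^2 / n = 20
mse_shrunk = np.mean((shrunk - mu) ** 2)      # bias^2 + shrink^2 * sigma^2 / n

print(f"MSE of unbiased estimator:        {mse_unbiased:.2f}")
print(f"MSE of shrunken (biased) estimator: {mse_shrunk:.2f}")
```

With these values the shrunken estimator's small squared bias is more than offset by its reduced variance, so its overall mean squared error is lower; with a larger sample size the comparison would tip back toward the unbiased estimator.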
Additional examples of bias include the following:
- Conscious selection: A randomized clinical trial requires participants to have the disease of interest, but not be too ill. The treatment comparison is internally valid, but generalizing findings to all diseased individuals may introduce a bias.
- Regression dilution: Reducing elevated blood pressure is known to reduce the risk of a myocardial infarction. However, blood pressure is measured with error, and regression dilution produces an attenuated (biased) relation between the intervention and risk.
- Dropout bias: Consider a study of the effects of coaching on SAT scores reporting that students who completed the coaching program scored, on average, fifty points higher on their next SAT than those who dropped out. This result is unbiased as a comparison of completers with noncompleters; it is positively biased, however, as an assessment of the effect of the program on all who start it, whether or not they complete it.
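The regression dilution example in the list above can be made concrete with a simulation. The following sketch, added for illustration and not part of the original entry, regresses an outcome on a standardized exposure measured once without error and once with error of equal variance; the attenuation of the slope is the dilution effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
true_slope = 2.0

x_true = rng.normal(0.0, 1.0, n)                     # true exposure (standardized)
y = true_slope * x_true + rng.normal(0.0, 1.0, n)    # outcome depends on the true value
x_observed = x_true + rng.normal(0.0, 1.0, n)        # measured with error (equal variance)

# Ordinary least-squares slope from the error-free and the error-contaminated predictor.
slope_true = np.polyfit(x_true, y, 1)[0]
slope_observed = np.polyfit(x_observed, y, 1)[0]

print(f"slope using the true exposure:     {slope_true:.2f}")   # near 2.0
print(f"slope using the measured exposure: {slope_observed:.2f}")  # near 1.0 (attenuated)
```

When the measurement-error variance equals the exposure variance, the slope is attenuated by the reliability ratio var(x) / (var(x) + var(error)) = 1/2, halving the apparent strength of the relation.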
Other types of bias typically encountered in epidemiologic research, particularly those employing observational designs, include recall and observer bias. Recall bias arises if one group systematically over- or underreports information about an exposure or risk factor in comparison to the other group. Observer bias occurs if one group is systematically "observed" and reported to behave in a manner that is different from the other group.
Careful design and conduct of studies and careful interpretation of results are necessary to reduce or eliminate bias. Minimizing bias in design and conduct is preferable to relying on post hoc statistical "cures" such as covariate adjustment and causal modeling. These powerful techniques are absolutely necessary in analyzing observational studies and can be used to "mop up" some bias in designed experiments, but their effectiveness depends on model validity and expert tuning to the specific study.
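Covariate adjustment of the kind mentioned above can be sketched in a few lines. In this illustrative simulation (an assumption-laden toy, not the entry's own example), a confounder influences both exposure and outcome, so the crude regression slope overstates the true effect; including the confounder in the regression recovers it, provided the model is correctly specified.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# A confounder (e.g., age) affects both exposure and outcome.
z = rng.normal(0.0, 1.0, n)
exposure = 0.8 * z + rng.normal(0.0, 1.0, n)
outcome = 1.0 * exposure + 2.0 * z + rng.normal(0.0, 1.0, n)  # true effect = 1.0

# Crude (unadjusted) estimate: regress outcome on exposure alone.
crude = np.polyfit(exposure, outcome, 1)[0]

# Adjusted estimate: include the confounder in the design matrix.
X = np.column_stack([np.ones(n), exposure, z])
adjusted = np.linalg.lstsq(X, outcome, rcond=None)[0][1]

print(f"crude estimate:    {crude:.2f}")     # biased upward by confounding
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect, 1.0
```

The adjustment works here because the confounder is fully measured and the linear model is correct; as the entry notes, such "cures" fail when those assumptions do not hold.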
GERMAINE M. BUCK
THOMAS A. LOUIS