Type I and type II errors are essential to understanding hypothesis testing in clinical research. A type I error occurs when a researcher rejects a null hypothesis that is actually true. In other words, a type I error is a false positive: the conclusion that a treatment has an effect when in reality it does not. Consider a study examining a drug’s effectiveness at lowering cholesterol. The data may suggest that the drug works and lowers cholesterol when in fact it does not.
A type II error can be thought of as the opposite of a type I error: it occurs when a researcher fails to reject a null hypothesis that is actually false. Said differently, a type II error is a false negative: the conclusion that a treatment effect does not exist when in reality it does. Returning to the example above, we would conclude that the drug does not work when in fact it does. In both scenarios, the data are misleading.
When planning or evaluating a study, it is important to understand that we can only take measures to mitigate the risk of both errors. We have direct control only over the type I error rate, which the researcher sets before the study begins. This threshold is known as “alpha,” and the general consensus in the scientific literature is to use an alpha level of 0.05. The type II error rate, by contrast, depends on several other factors, such as sample size, the true size of the treatment effect, and the variability of the data, so it cannot be set directly in the way alpha can. Nonetheless, both errors are equally important.
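As a rough illustration of these ideas, the short Python sketch below simulates many repetitions of a hypothetical two-arm cholesterol study and counts how often each error occurs. The sample size, standard deviation, and treatment effect are invented purely for illustration, and a simple two-sample z-test with a known standard deviation stands in for the analyses an actual trial would use.

```python
import math
import random

random.seed(0)

ALPHA = 0.05
Z_CRIT = 1.959964   # two-sided critical value for alpha = 0.05
N = 50              # hypothetical patients per arm (illustrative)
SIGMA = 30.0        # assumed known SD of cholesterol change, mg/dL (illustrative)
TRIALS = 10000      # number of simulated studies

def study_rejects_h0(true_effect):
    """Simulate one two-arm study; return True if the null hypothesis is rejected."""
    drug = [random.gauss(true_effect, SIGMA) for _ in range(N)]
    placebo = [random.gauss(0.0, SIGMA) for _ in range(N)]
    diff = sum(drug) / N - sum(placebo) / N
    se = SIGMA * math.sqrt(2.0 / N)        # standard error of the mean difference
    return abs(diff / se) > Z_CRIT

# Type I error: the drug truly has no effect, yet we reject the null hypothesis.
type1_rate = sum(study_rejects_h0(0.0) for _ in range(TRIALS)) / TRIALS

# Type II error: the drug truly lowers cholesterol by 10 mg/dL (an assumed
# effect size), yet we fail to reject the null hypothesis.
type2_rate = sum(not study_rejects_h0(-10.0) for _ in range(TRIALS)) / TRIALS

print(f"Type I error rate:  {type1_rate:.3f} (alpha = {ALPHA})")
print(f"Type II error rate: {type2_rate:.3f}")
```

Under these assumptions, the simulated type I error rate lands near the chosen alpha of 0.05, while the type II error rate shifts with the sample size, effect size, and variability chosen above, which is exactly why it cannot be pinned down by a single pre-set number.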
Justin L. Gregg, MA, is a clinical research specialist for TriHealth Hatton Research Institute for Research and Education in Cincinnati, OH.