Title: Tutorial: Introduction to Multiplicity in Clinical Trials
Year of Publication: 2014
Many scientific experiments subject to rigorous statistical analyses involve the simultaneous evaluation of more than one question. Multiplicity therefore becomes an inherent problem with various unintended consequences. The most widely recognized consequence is that the findings of an experiment can be misleading: seemingly significant effects occur more often than expected by chance alone, and failing to compensate for multiplicity can have important real-world consequences. For instance, when the multiple comparisons involve drug efficacy, they may result in approval of a drug as an improvement over existing drugs when there is in fact no beneficial effect. On the other hand, when drug safety is involved, it could happen by chance that the new drug appears to be worse for some side effect when it is actually not worse at all. By contrast, multiple comparison procedures adjust statistical inferences from an experiment for multiplicity. Multiple comparison procedures thus enable better decision making and prevent the experimenter from declaring an effect when there is none.
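The inflation of the chance of a spurious finding can be made concrete with a short calculation. The sketch below (function name is our own, for illustration) assumes m independent tests of true null hypotheses, each performed at level alpha, and computes the probability that at least one of them is falsely declared significant:

```python
def fwer_unadjusted(m, alpha=0.05):
    """Probability of at least one false positive among m independent
    tests of true null hypotheses, each run at level alpha with no
    multiplicity adjustment: 1 - (1 - alpha)^m."""
    return 1 - (1 - alpha) ** m

# With alpha = 0.05, the chance of at least one spurious "significant"
# result grows quickly with the number of comparisons:
for m in (1, 5, 10, 20):
    print(m, round(fwer_unadjusted(m), 4))
```

With 10 comparisons the familywise error rate is already about 0.40, and with 20 it exceeds 0.64, which is why unadjusted testing of many endpoints so easily produces misleading conclusions.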
This tutorial will discuss general concepts and basic multiple comparison procedures, including a review of the Bonferroni and Simes methods and their stepwise extensions. Turning to more advanced multiple comparison procedures, we introduce closed testing as a general construction principle and show that many common multiple test procedures can be viewed as closed test procedures. Finally, we briefly introduce recently developed graphical approaches to multiple testing. In this tutorial we focus on "classical" applications of multiple comparison procedures, where the number of comparisons is moderate and/or where strong evidence is needed, so the emphasis is on familywise error rate controlling methods.
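To fix ideas, here is a minimal sketch of the Bonferroni adjustment and of Holm's procedure, the standard stepwise extension of Bonferroni mentioned above (function names are our own; both return adjusted p-values that control the familywise error rate when compared against alpha):

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values: each raw p-value is multiplied
    by the number of hypotheses m, capped at 1."""
    m = len(pvals)
    return [min(1.0, m * p) for p in pvals]

def holm(pvals):
    """Holm step-down adjusted p-values: sort the p-values, multiply
    the k-th smallest by (m - k + 1), enforce monotonicity with a
    running maximum, and cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

raw = [0.01, 0.04, 0.03, 0.005]
print(bonferroni(raw))  # [0.04, 0.16, 0.12, 0.02]
print(holm(raw))        # [0.03, 0.06, 0.06, 0.02]
```

Holm's procedure is uniformly more powerful than Bonferroni while still controlling the familywise error rate in the strong sense, which is why such stepwise extensions are preferred in practice.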