TITLE: A fresh look at effect aliasing and interactions: some new wine in old bottles

ABSTRACT:

Interactions and effect aliasing are among the fundamental concepts in experimental design. Some new insights and approaches are given on this time-honored subject. We start with the very simple two-level fractional factorial designs. Two interactions AB and CD are said to be aliased if both represent and are used to estimate the same effect. In the literature such aliased effects have been deemed impossible to “de-alias” or estimate separately. We argue that this “impossibility” can indeed be resolved by taking a new approach, which consists of reparametrization using the notion of “conditional main effects” (cme’s) and model selection that exploits the relationships between the cme’s and the traditional factorial effects. In some sense this is a shocking result, as the impossibility has been taken for granted since the founding work of Finney (1945). There is a similar surprise for three-level fractional factorial designs. The standard approach is to use ANOVA to decompose the interactions into orthogonal components, each with two degrees of freedom. Then the quandary of full aliasing between interaction components remains. Again this can be resolved by a non-orthogonal decomposition of the four degrees of freedom of the A×B interaction based on the linear-quadratic parametrization. A model search strategy then allows the estimation of some interaction components even for designs of resolution III and IV. Moving from regular to nonregular designs such as the Plackett-Burman designs, most of the interactions are not orthogonal to the main effects. The partial aliasing of effects and its complexity were traditionally viewed as “hazards”. Hamada and Wu (1992) recognized that this could be turned into an advantage. Their analysis strategy for effect de-aliasing is a precursor to the approaches described above. Underlying all three problems are the use of reparametrization and the exploitation of non-orthogonality among some effects. The stated approach can be extended beyond designed experiments, and potential applications in machine learning will be outlined.
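
A minimal numerical sketch of the two-level case (a toy illustration in Python, not material from the lecture; the cme column coding used here is one common choice and is an assumption): in the 2^(4-1) design with defining relation I = ABCD, the interaction columns AB and CD are identical, whereas a cme column such as A|B+ is only partially correlated with the main effects and with CD, which is the non-orthogonality that a model selection step can exploit.

import itertools
import numpy as np

# Half fraction of the 2^4 factorial: set D = ABC, so the defining relation is I = ABCD.
runs = np.array([(a, b, c, a * b * c)
                 for a, b, c in itertools.product([-1, 1], repeat=3)])
A, B, C, D = runs.T

AB, CD = A * B, C * D
print(np.array_equal(AB, CD))      # True: AB and CD are fully aliased in this design

# A cme column for "A given B at its + level" (assumed coding): the level of A
# on runs with B = +1, and 0 on the remaining runs.
A_given_B_plus = A * (B == 1)

def corr(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(corr(A_given_B_plus, A))     # about 0.71: partially, not fully, aliased with A
print(corr(A_given_B_plus, CD))    # about 0.71: partially aliased with CD (= AB)

Because no pair of these candidate columns is fully aliased once cme’s enter the model space, a search over such models can in principle separate effects that the traditional parametrization cannot.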

(This internal talk is a repeat of the Akaike Memorial Lecture I gave on September 5 in Kanazawa, Japan.)