Date: Wed, 21 Feb 2001 09:21:15 -0800
Reply-To: Dale McLerran <dmclerra@MY-DEJA.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Dale McLerran <dmclerra@MY-DEJA.COM>
Subject: Re: multiple comparisons of not normally distributed data
You raise one of the touchiest issues in statistics. Your
response to the reviewers' comments must depend on at least a
couple of factors:
1) Is it reasonable to assume that your analyses are exploratory?
2) What is the journal standard for dealing with multiple endpoints?
Given multiple endpoints, my guess would be that most, if not
all, of the response variables are not primary endpoint measures.
If they are not primary endpoint measures, then the analyses
could be considered exploratory. Greater latitude is allowed for
exploratory analyses. Essentially, the argument is that we are
not really testing a hypothesis, but that we are looking for
testable hypotheses for future research. The outcomes which are
significant, or nearly so, in this investigation may become
primary endpoints of some future investigation. At that time,
a more rigorous standard may be required.
The second point, about journal standards, trumps the first argument.
If journal standards require adjustment for multiple tests,
then adjustment you must make. How do you know the journal
standards? Well, the instructions to authors are a starting point.
Is there anything in the instructions to authors which indicates
that multiple test adjustment is required when testing multiple
endpoints? Yes? Then your choice is clear. You must make the
adjustments. No? Well, then you must look at previous journal
articles and see what has been done in the past. If you can
establish that the journal allows reporting of nominal
significance levels when the endpoints may be considered
exploratory, then you have some basis for holding out against
multiple test adjustment procedures.
If you believe that you must perform adjustments for multiple
comparisons, then I would check out the book "Multiple Comparisons
and Multiple Tests Using the SAS System" by Westfall et al.
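To make the adjustment idea concrete, here is a small stdlib-Python sketch of the two most common p-value adjustments: simple Bonferroni and Holm's step-down procedure (the latter is uniformly less conservative and is among the methods PROC MULTTEST offers). The raw p-values below are hypothetical, purely for illustration.

```python
# Illustrative only: adjust raw p-values from, say, three pairwise
# Mann-Whitney tests. The input p-values are made up.

def bonferroni(pvals):
    """Multiply each p-value by the number of tests, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm's step-down adjustment: order p-values, multiply the
    k-th smallest by (m - k + 1), then enforce monotonicity."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # keep adjusted p-values monotone
        adjusted[i] = running_max
    return adjusted

raw = [0.012, 0.034, 0.21]   # hypothetical raw p-values
print(bonferroni(raw))        # approx. [0.036, 0.102, 0.63]
print(holm(raw))              # approx. [0.036, 0.068, 0.21]
```

Note that the middle test survives a 0.05 cutoff under Holm but not under Bonferroni, which is why Holm is usually preferred when a familywise adjustment is required at all.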
By the way, I am curious as to how many observations you have
for your analyses. The reason I raise this question is that
it is very easy to reject the hypothesis of normality given
sufficient data. I believe that visual inspection of the
distribution of the residuals employing a histogram with a normal
density curve superimposed is usually preferable to strict
testing of normality assumptions. Many procedures are robust to
some departure from normality. It may be that you are applying
too strict a standard here. If you do have to perform adjustment
for multiple tests, then you have more options if you can assume
normality. I assume, too, that you have investigated possible
transformations to the response variables to improve normality.
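As a quick illustration of that last point, here is a stdlib-Python sketch showing how a log transform can pull a right-skewed response toward symmetry. The lognormal sample is simulated and the names are illustrative; the idea is simply that sample skewness drops toward zero after the transform.

```python
# Sketch (stdlib only, simulated data): a right-skewed variable often
# becomes roughly symmetric, hence closer to normal, after taking logs.
import math
import random

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

random.seed(1)
raw = [random.lognormvariate(0, 1) for _ in range(1000)]  # skewed response
logged = [math.log(x) for x in raw]                       # transformed

print(round(skewness(raw), 2))     # large positive: strongly right-skewed
print(round(skewness(logged), 2))  # near 0: roughly symmetric
```

The same before-and-after comparison is easy to do visually in SAS with a histogram and overlaid normal curve, which is the inspection I recommend above.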
>Date: Wed, 21 Feb 2001 11:25:53 +0100
>Reply-To: "Christian F.G. Schendera" <schendera@NIKOCITY.DE>
>From: "Christian F.G. Schendera" <schendera@NIKOCITY.DE>
>Subject: multiple comparisons of not normally distributed data
>Data situation: 3 independent groups, several continuous dependent/response
>vars. The continuous vars are not normally distributed (only a third of them
>reach Shapiro-Wilk p > 0.1).
>Problem: Colleagues compared the three groups pairwise with simple
>Mann-Whitney tests at alpha 0.05. Journal reviewers criticized this procedure
>for not having used p-adjusting procedures like Bonferroni.
>Question: Are the reviewers right? How could one apply p-adjusting procedures
>when the conditions for ANOVA/GLM are not met? Could MULTTEST be used to
>perform multiple comparisons on the described data? Or adjust the p-values in
>the pairwise comparisons? What would you recommend in this situation?
>Thanks in advance,
Fred Hutchinson Cancer Research Center
Ph: (206) 667-2926
Fax: (206) 667-5977