Date: Fri, 29 Jun 2001 14:27:58 -0400
Reply-To: Peter Flom <peter.flom@NDRI.ORG>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Peter Flom <peter.flom@NDRI.ORG>
Subject: Re: test of skewness and Kurtosis
Content-Type: text/plain; charset=US-ASCII
Regarding the visual approach, I absolutely agree. Q-Q plots can also be very useful. But if one does not have SAS/GRAPH or some other package to do this in, it is difficult. Personally, I display distributions in S-Plus. Also, while the graphical approach is in many respects the best, it is hard to report on, especially to (say) journal editors. "JUST LOOK AT IT!!" is not acceptable (more's the pity).
This came up for me in my dissertation: I had data which were shown, by the Kolmogorov-Smirnov test, to be NONnormal. I used language something like "none of the distributions was grossly nonnormal, all were unimodal, and none had absolute skewness above XXX or kurtosis above XXXX".
Regarding the other tests, thanks for the info! This is, to me, a fascinating area.
>>> "Dale McLerran" <email@example.com> 06/29/01 02:16PM >>>
You have a point. However, rather than looking at tests of skewness
and kurtosis in this situation, I would prefer the visual inspection approach of plotting a histogram of the data with a normal density
curve superimposed on it. You can demonstrate right from the figure
what problems the distribution has with regard to assumptions about
normality. You can also use this as a basis for selecting a
transformation of the data to achieve approximate normality.
Note, though, that there are a number of tests of normality which are
not as sensitive to large sample effects as the tests reported by PROC
UNIVARIATE. A couple of tests that I am aware of are actually
constructed to test whether a distribution is Uniform(0,1). These
tests then proceed by first standardizing the data to have mean 0 and
variance 1 (z-statistics) and then computing the probability values
under the normal distribution of the z-statistics. If normality
holds, then the probability values will be distributed Uniform(0,1).
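The transformation just described can be sketched in a few lines. This is Python rather than SAS (a convenience for illustration only), using just the standard library; the function names `norm_cdf` and `normal_pvalues` are mine, not anything from the original post.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_pvalues(data):
    """Standardize to z-scores, then map through the normal CDF.

    If the data really are normal, the returned values should be
    (approximately) Uniform(0,1)."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return [norm_cdf((x - mean) / sd) for x in data]

# A normal sample should yield p-values with mean near 0.5
random.seed(1)
sample = [random.gauss(0, 1) for _ in range(1000)]
pvals = normal_pvalues(sample)
print("mean p-value:", round(sum(pvals) / len(pvals), 3))
```

Any subsequent test of normality then reduces to a test of whether these p-values look Uniform(0,1).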
Now, one test of the Uniform distribution which would not be as
sensitive to sample size as a K-S test would be to count the number
of observations in intervals of length 0.05 or 0.10. We know that if
the Uniform(0,1) distribution holds, then the number of observations
in each interval should be equal. We can perform a chi-square test
of this assumption.
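The interval-count idea above might look like this, again sketched in Python rather than SAS; the name `uniform_chisq` and the default of 10 cells (intervals of length 0.10) are my choices for illustration.

```python
def uniform_chisq(pvals, k=10):
    """Pearson chi-square statistic for H0: pvals ~ Uniform(0,1),
    using k equal-width cells on [0,1] with equal expected counts."""
    n = len(pvals)
    counts = [0] * k
    for p in pvals:
        # min() keeps a p-value of exactly 1.0 in the last cell
        counts[min(int(p * k), k - 1)] += 1
    expected = n / k
    return sum((c - expected) ** 2 / expected for c in counts)
```

Under the null, the statistic is approximately chi-square with k-1 degrees of freedom, so with 10 cells one would compare it against the chi-square critical value for 9 df.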
Another test I have recently become aware of is the Birnbaum test.
As with the chi-square test above, we use the probability values
which arise from the normal distribution for the z-statistics.
The mean probability value is then used to compute the Birnbaum test
statistic. Call the mean of the probability values Pbar. Then the
test statistic is
birnbaum = sqrt(N) * (0.5 - Pbar)
The asymptotic variance estimate of the Birnbaum statistic is 1/12.
For large N, we can test whether the p-values are Uniform(0,1) by
comparing birnbaum*sqrt(12) (the statistic divided by its standard
error, sqrt(1/12)) to tables of the standard normal dist.
Note that we are only testing whether the mean p-value differs from
0.5. If the data are symmetric but severely kurtotic, then the
Birnbaum statistic would be inappropriate. A chi-square test as
presented above would be preferable.