Getting back to basics:
Presume a totally fair coin. Then, as N grows, the probability of
getting EXACTLY N/2 heads shrinks. But the probability of getting
APPROXIMATELY N/2 heads grows. The chance of getting a statistically
significant result with a totally fair coin is .05 (or whatever value is
chosen) regardless of N; but the difference between .5 and the
proportion of heads which will give a significant result shrinks as N
grows.
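Both claims are easy to check numerically. Here is a quick sketch in Python (the helper names are mine, not from this thread; exact binomial probabilities are computed in log space to avoid overflow at large N):

```python
from math import exp, lgamma, log

def log_binom_pmf(n, k):
    # log P(k heads in n fair flips) = log C(n, k) - n*log(2)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) - n * log(2)

def p_exact_half(n):
    """P(exactly n/2 heads), n even -- shrinks as n grows."""
    return exp(log_binom_pmf(n, n // 2))

def p_within_one_pct(n):
    """P(heads within 1% of n/2) -- grows toward 1 as n grows."""
    lo, hi = round(0.49 * n), round(0.51 * n)
    return sum(exp(log_binom_pmf(n, k)) for k in range(lo, hi + 1))

for n in (100, 1000, 10000):
    print(n, p_exact_half(n), p_within_one_pct(n))
```

The exact-half probability falls like 1/sqrt(N), while the "within 1%" probability climbs toward 1.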
This is the basic reason why the reification of significance testing is
a bad idea, at least in most cases. The p value tests whether these
results are likely to have happened by chance, given that the null
hypothesis is true (e.g., the coin is fair). But what we are usually
interested in is whether it's likely that the null hypothesis is true,
given these results. The two are not equivalent. We are also usually
interested in effect size, not just statistical significance. If I
test a diet (say) on 100,000 people, then what is interesting to people
who might follow the diet is NOT whether the average weight loss is
significant (it is very likely to be significant) but how large it is.
Peter L. Flom, PhD
Assistant Director, Statistics and Data Analysis Core
Center for Drug Use and HIV Research
National Development and Research Institutes
71 W. 23rd St
New York, NY 10010
(212) 845-4485 (voice)
(917) 438-0894 (fax)
>>> Bill Anderson <wnilesanderson@COX.NET> 01/13/03 03:10PM >>>
Actually, if a 'fair' coin is flipped 1,000,000 times, the probability of
rejecting the null hypothesis is still 0.05 (or whatever alpha is chosen).
We know that there are physical differences between the head and tail
sides of a coin, and it is quite believable that no coin is perfectly fair. So if
we flip a LOT of times, we figure to reject the null hypothesis of fairness.
This is not due to an error in statistics; rather it is a reflection of the
lack of fairness in the coin.
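A small simulation bears out the first point (this sketch and its function name are mine; it uses a normal-approximation z-test): for a truly fair coin, the rejection rate stays near alpha no matter how many flips you make.

```python
import random
from math import erfc, sqrt

def rejects_fairness(n, p=0.5, alpha=0.05):
    """One simulated experiment of n flips; two-sided z-test of H0: coin is fair."""
    heads = sum(random.random() < p for _ in range(n))
    z = (heads - n / 2) / sqrt(n / 4)
    return erfc(abs(z) / sqrt(2)) < alpha

random.seed(1)
for n in (100, 2000):
    rate = sum(rejects_fairness(n) for _ in range(1000)) / 1000
    print(n, rate)  # near 0.05 either way
```

Replace p=0.5 with, say, p=0.501 and the rejection rate creeps toward 1 as n grows, which is the "no coin is perfectly fair" point above.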
Probably the simplest way to handle this is using the concept of
equivalence. Decide in advance what amount of difference really matters,
and use the null hypothesis that the difference is this big or bigger.
Then larger sample sizes will get you to the truth: if the difference does
not matter, then large sample sizes will reject the null hypothesis, and you
will correctly conclude equivalence. It may or may not happen that at the
same time you have a statistically significant difference, but that
situation is simply unimportant.
There is a lot of journal literature on the subject of equivalence, but
it is still slow to get into elementary textbooks.
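One standard way to run such a test is "two one-sided tests" (TOST); the sketch below is mine (hypothetical function names, normal approximation), not from the post. You declare equivalence only when you can reject BOTH "the true difference is at or below -delta" and "the true difference is at or above +delta":

```python
from math import erfc, sqrt

def norm_sf(z):
    """Upper-tail probability of a standard normal."""
    return 0.5 * erfc(z / sqrt(2))

def tost_equivalent(diff, se, delta, alpha=0.05):
    """Two one-sided tests: reject H0 (|true difference| >= delta)
    when both one-sided p-values fall below alpha."""
    p_lower = norm_sf((diff + delta) / se)  # H0: true diff <= -delta
    p_upper = norm_sf((delta - diff) / se)  # H0: true diff >= +delta
    return max(p_lower, p_upper) < alpha

# Same small observed difference; only the large sample (small SE) settles it
print(tost_equivalent(diff=0.05, se=0.1, delta=0.5))  # True: equivalent
print(tost_equivalent(diff=0.05, se=1.0, delta=0.5))  # False: inconclusive
```

Note how a large N (small standard error) works FOR you here: it lets you conclude that an unimportant difference really is unimportant, rather than merely stamping it "significant."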
----- Original Message -----
From: "Bross, Dean S" <dean.bross@HQ.MED.VA.GOV>
Sent: Monday, January 13, 2003 8:14 AM
Subject: Re: Interpretation of Small Effect with Large N
> Some people sum up this finding as proving what seems to be
> one of the untaught laws of nature:
> All null hypotheses are false.
> I consider this to be just like one of the laws of thermodynamics.
> It is not an error in statistical methods.
> -----Original Message-----
> From: Tim Berryhill [mailto:tim@AARTWOLF.COM]
> Sent: Saturday, January 11, 2003 11:34 AM
> To: SAS-L@LISTSERV.UGA.EDU
> Subject: Re: Interpretation of Small Effect with Large N
> Would someone mind expanding on this? I usually use SAS for
> business data processing, but back when I worked in research I noticed
> that if the sample size was large then the differences were ALWAYS
> significant. On the flip side, I know that if one counts heads and tails
> for 1,000,000 flips of a balanced coin, the odds of getting exactly
> 500,000 heads are quite low.
> Is there a mistake in the choice of statistics which crops up with large
> sample sizes? Is it a matter of violated assumptions which only shows up
> when you have large N?
> Just curious (in case I try to cure cancer),
> Tim Berryhill
> "Paul Thompson" <email@example.com> wrote in message
> > Just guessing here, but I bet you have beaucoup participants, n'est-ce pas?
> > Many many?
> > Thompson Bill T Contr USAFSAM/FEC wrote:
> > > Can someone please explain to me or point me in the right direction to help
> > > me understand how to interpret the results of a repeated measures analysis
> > > where you have a small effect (.20) with strong power (.943).
> > >
> > > Thanks in advance,
> > >
> > > Bill