Date: Wed, 25 Apr 2007 12:46:46 -0400
Reply-To: Steve Denham <steven.c.denham@MONSANTO.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Steve Denham <steven.c.denham@MONSANTO.COM>
Subject: Re: Computing power post-hoc
On Wed, 25 Apr 2007 08:56:54 -0700, Pardee, Roy <pardee.r@GHC.ORG> wrote:
>Non-statistician question here...
>Can you answer the question "how big would my effect size have had to be
>in order for it to be statistically distinguishable from zero?" w/a
>post-hoc power analysis? I would think that could be useful to know...
That kind of calculation is easy enough to do, and can be fairly
enlightening in the sense that it tells you something about the experiment
you just finished. I wouldn't see any sense in calculating probabilities
(i.e., a "power analysis") for all the reasons we've already thrown
around. In addition, I would worry about generalizing that "significant
effect size" to future experiments, because it depends not only on the
realized value of your variance estimate, but also on the available
sample size.
Now if you express effect size as multiples of the root MSE, then we avoid
the sample size dependency. But then we already know that an effect size
equal to roughly twice sqrt(2/n) times the root MSE is going to be
significant, since sqrt(2*MSE/n) is the standard error of a difference
between two group means and "twice" stands in for the t critical value.
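To make that concrete, here is a minimal sketch (mine, not from the post) of the calculation in Python rather than SAS, using the normal approximation (z of about 1.96) in place of the t critical value; the function name and example values are made up for illustration:

```python
import math
from statistics import NormalDist

def min_detectable_diff(mse: float, n: int, alpha: float = 0.05) -> float:
    """Smallest two-group mean difference that would reach significance
    at level alpha, given the residual variance (MSE) and a per-group
    sample size n from the completed experiment.

    The standard error of a difference between two group means is
    sqrt(2 * MSE / n); the normal quantile approximates the t critical
    value, which is fine unless the error df are small.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    return z * math.sqrt(2 * mse / n)

# Hypothetical example: MSE = 4.0, n = 10 per group.
d = min_detectable_diff(4.0, 10)

# The rule of thumb above: ~twice sqrt(2/n) times the root MSE.
rough = 2 * math.sqrt(2 / 10) * math.sqrt(4.0)
```

With these made-up numbers the exact and rule-of-thumb answers land within a few percent of each other, which is the point: once you have the MSE and n, the minimally detectable effect falls straight out.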
So, after all the blah-blah-blah, it comes down to how precisely we
estimate the residual error.
And that depends on how closely the assumptions of whatever analysis is
used are being met. Dagnab it, this is NOT a non-statistician-friendly
answer. So, the friendly-type answer:
Yes it can be done.
No probability calculation needed.
I don't trust the result as an inferential tool.