So it seems like the proper way to get an estimate of the confidence
intervals from the simulation is to compute the CI for each simulation and
then average those values. So I'd need to recover the upper and lower
bounds of the CI from ODS if they exist, or compute them for each
simulation. Then I'd have nsim lines of data (where nsim is the number of
simulations), each from a sample of size n:
obs beta beta95lower beta95upper
where the upper and lower bounds are beta+/-1.96*SD/sqrt(n) for a 95% CI.
Then it's just a matter of computing the means of these values (sum of
beta/nsim, etc.) to obtain the CI of the coefficient as estimated by the
simulation.
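The procedure above can be sketched in Python (the thread itself concerns SAS and ODS, so the language, the N(10,2) draw, and the sample sizes here are illustrative assumptions, not the poster's actual code):

```python
import numpy as np

rng = np.random.default_rng(42)
nsim, n = 1000, 50          # number of simulations, sample size per simulation
true_mean, true_sd = 10, 2  # draws from N(10, 2), as in the quoted question

betas, lowers, uppers = [], [], []
for _ in range(nsim):
    sample = rng.normal(true_mean, true_sd, size=n)
    beta = sample.mean()                            # estimated coefficient
    half = 1.96 * sample.std(ddof=1) / np.sqrt(n)   # 95% CI half-width
    betas.append(beta)
    lowers.append(beta - half)  # beta95lower
    uppers.append(beta + half)  # beta95upper

# Average the per-simulation bounds to get the simulation-based CI
ci_lower = np.mean(lowers)
ci_upper = np.mean(uppers)
print(np.mean(betas), ci_lower, ci_upper)
```

Each pass through the loop produces one "obs beta beta95lower beta95upper" line; the final means over the nsim lines are the averaged bounds described above.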
On Tue, 10 Jan 2012 08:19:17 -0500, William Shakespeare wrote:
>A few conceptual questions about simulation and power in the context of
>glm: The idea is to generate data as if it were a random draw from some
>prespecified distribution, say N(10,2), right? I run a model and test the
>model coefficients at some level, e.g., .05. If I do this many times and
>count the number of times a coefficient is significant and divide that by
>my total number of simulations, that is my power, correct?
>What if I want to obtain confidence intervals as well? Could I not save
>the actual value of a coefficient and then compute a mean and standard
>deviation for that estimate? For a 95% CI can I use
>Mean +/- 1.96*SD/sqrt(n), where n is the number of simulations, or do I
>have to take into account
>the number of variables in the model?
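The power calculation described in the quoted question can be sketched the same way: reject or not at alpha = .05 in each simulation, then divide the rejection count by nsim. Here a simple linear regression with a hypothetical slope and noise level (b1 = 0.6, sigma = 1, neither from the thread) stands in for the glm model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nsim, n = 2000, 30              # number of simulations, sample size
b0, b1, sigma = 1.0, 0.6, 1.0   # hypothetical effect size and noise level
alpha = 0.05

rejections = 0
for _ in range(nsim):
    x = rng.normal(0, 1, n)
    y = b0 + b1 * x + rng.normal(0, sigma, n)
    res = stats.linregress(x, y)    # slope, p-value for H0: slope = 0, etc.
    if res.pvalue < alpha:
        rejections += 1

power = rejections / nsim           # proportion of significant results
print(power)
```

The count of significant results over the total number of simulations is the power estimate, exactly as described in the first paragraph of the question.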