Date: Thu, 7 Apr 2005 01:21:09 -0700
Reply-To: Oliver Kuss <Oliver.Kuss@MEDIZIN.UNI-HALLE.DE>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Oliver Kuss <Oliver.Kuss@MEDIZIN.UNI-HALLE.DE>
Subject: Re: Biased prob. estimates in random logistic regression?
Content-Type: text/plain; charset=ISO-8859-1
I think you are expecting too much of the model and the estimation
procedure.
I ran your simulation (with your original code, that is, with the
random intercept) a hundred times and got a mean estimated p of 0.910
(Min: 0.872, Max: 0.942) for ItemType=0 and a mean estimated p of
0.178 (Min: 0.141, Max: 0.236) for ItemType=1. These are very good
estimates given the data you have. Is the observed difference from the
true values of 0.9 and 0.2 really relevant?
Remember that you have only 30 observations to estimate the random
effect, very large fixed effects (so separation might become an
issue), and a very complicated likelihood function (one that can only
be maximized by numerical integration!), so in my opinion PROC NLMIXED
is doing a very good job here.
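Two of the points above can be illustrated with a small simulation. This is a sketch in Python rather than SAS, and the intercept, random-effect SD, and replication counts are assumed values for illustration, not taken from the original code. It shows (a) how much the observed proportion bounces around with only 30 observations per data set, and (b) that the subject-specific probability expit(beta0) is mathematically more extreme than the marginal probability once the random intercept is integrated out, which may be part of what the bias check is picking up.

```python
import numpy as np

rng = np.random.default_rng(42)

def expit(x):
    # inverse logit
    return 1.0 / (1.0 + np.exp(-x))

beta0 = np.log(0.9 / 0.1)  # logit(0.9), the "true" fixed effect
sigma = 1.0                # hypothetical random-intercept SD (assumed)
n_subjects = 30            # the 30 observations mentioned above
n_reps = 200               # number of simulated data sets (assumed)

# Probability at the mean (zero) random effect: plain expit(beta0)
conditional_p = expit(beta0)

# Marginal probability: average expit(beta0 + b) over the random effects
b = rng.normal(0.0, sigma, size=200_000)
marginal_p = expit(beta0 + b).mean()

# Sampling variability of the observed proportion with only 30 subjects
props = []
for _ in range(n_reps):
    bi = rng.normal(0.0, sigma, size=n_subjects)
    y = rng.random(n_subjects) < expit(beta0 + bi)
    props.append(y.mean())
props = np.array(props)

print(f"conditional p, expit(beta0):   {conditional_p:.3f}")
print(f"marginal p, E[expit(beta0+b)]: {marginal_p:.3f}")
print(f"observed proportion across {n_reps} data sets: "
      f"min {props.min():.3f}, max {props.max():.3f}")
```

By Jensen's inequality, expit(beta0) is more extreme (closer to 0 or 1) than the marginal mean of expit(beta0 + b) whenever sigma > 0, so back-transformed fixed effects from a random-intercept fit need not match the raw mean of the generated data.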
> Hmm, I may not have been very clear. It's the random effects that are
> causing the problem, and I can't leave them out because they're
> important to modeling several of our data sets. The two item types are
> kind of a red herring, in the sense that the model still gives
> biased-looking estimates with only one item type, when the intercept
> and random effect size are the only parms.
> My expectation for the simulation is that the estimate ought to be an
> unbiased estimate of the actual mean in the generated data set (not
> necessarily .2 or .9) -- that is, it should be about equally likely to be
> too high as too low. Right now, the estimate is always too far toward
> the extreme probability (0 or 1). I might well be confused somewhere
> though, since this does seem like about the simplest possible use of
> NLMIXED with binary outcomes and random effects.