--- anne olean <firstname.lastname@example.org> wrote:
> --- Dale McLerran <stringplayer_2@YAHOO.COM> wrote:
> > Note that when you fit GLIMMIX, you cannot use the AIC and BIC
> > statistics. The likelihood reported by PROC MIXED is not the
> > correct likelihood for your model. Moreover, GLIMMIX constructs
> > an updated response variable with each iteration. That means
> > that the model you fit determines the response variable for
> > which GLIMMIX reports likelihoods. Now, you can only compare
> > likelihoods if you have the same response variable in all your
> > models. Since the model determines the (PROC MIXED) response
> > variable, you cannot use any of the likelihood-based statistics
> > reported by PROC MIXED for model comparison.
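To make the point concrete: under pseudo-likelihood, the "working" response for a log-link Poisson model is z = log(mu) + (y - mu)/mu, which depends on the fitted values mu. A minimal Python sketch with made-up counts and fitted values (illustrative only, not GLIMMIX's internal computation):

```python
import math

def pseudo_response(y, mu):
    """Working (pseudo) response for a log-link Poisson model:
    z = eta + (y - mu) * d(eta)/d(mu) = log(mu) + (y - mu)/mu."""
    return [math.log(m) + (yi - m) / m for yi, m in zip(y, mu)]

# Hypothetical observed counts and fitted means from two different models:
y = [0, 2, 5, 1]
z_model_a = pseudo_response(y, mu=[0.5, 2.2, 4.1, 1.3])
z_model_b = pseudo_response(y, mu=[0.9, 1.8, 4.8, 1.1])

# z_model_a != z_model_b: each model defines its own pseudo-response,
# so likelihoods computed from them are not comparable across models.
```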
> Is there a reference where I may read up on how to do model
> comparison when using GLIMMIX? I searched online but didn't find
> anything. What in the output from GLIMMIX can I use to evaluate
> the fit if not AIC/BIC etc.?
I should have added that GLIMMIX also reports model-fit statistics
listing the deviance, scaled deviance, and extra-dispersion scale.
If the extra-dispersion scale is about 0.8, do I have to address
the underdispersion, or is this tolerable?
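For intuition about that scale estimate: it is essentially a Pearson chi-square divided by its residual degrees of freedom, with values well below 1 suggesting underdispersion and well above 1 overdispersion. A rough Python sketch with hypothetical counts and fitted Poisson means (not the GLIMMIX computation, which works from the pseudo-data of the final iteration):

```python
def pearson_dispersion(y, mu, n_params):
    """Pearson chi-square / residual df for a Poisson fit.
    y: observed counts; mu: fitted means; n_params: number of
    estimated fixed-effect parameters (all values hypothetical)."""
    pearson_chi2 = sum((yi - m) ** 2 / m for yi, m in zip(y, mu))
    return pearson_chi2 / (len(y) - n_params)

# Made-up data for illustration:
y = [0, 1, 2, 1, 3, 2, 4, 1]
mu = [0.8, 1.2, 1.9, 1.1, 2.6, 2.1, 3.5, 1.4]
scale = pearson_dispersion(y, mu, n_params=2)
# scale < 1 here would point toward underdispersion.
```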
> Given that I have a count outcome (ranging from 0 to 7), would it
> be wrong to use PROC MIXED? I understand that it assumes a
> continuous outcome, but how robust is PROC MIXED to this
> violation?
> thanks, ako