Date: Fri, 19 Jan 2007 18:21:01 +0200
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Bora Yavuz <BoraYavuz@HSBC.COM.TR>
Subject: Re: Calibration tests
Content-type: text/plain; charset=US-ASCII
Sorry for the typo in the first paragraph:
<Non-overlapping lines (i.e., difference in *scores*...> should have been
<Non-overlapping lines (i.e., difference in *slopes*)...>.
The intuitive way of showing that the model does not perform well on some
(or all!) of the score brackets is to plot the actual log(odds) and the
expected log(odds) for each score bracket. Non-overlapping lines (i.e., a
difference in slopes and/or y-axis intercepts) will indicate poor model
performance.
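As an illustration of what goes into that plot (a sketch in Python rather than SAS; the bracket edges, record layout, and data below are hypothetical), the per-bracket actual and expected log(odds) can be computed like this:

```python
import math
from collections import defaultdict

def logodds(p, eps=1e-6):
    """Convert a probability to log(odds), clipping away exact 0 and 1."""
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

def bracket_logodds(records, edges):
    """For each score bracket [edges[i], edges[i+1]), return a tuple of
    (bracket midpoint, actual log-odds, expected log-odds).
    `records` is an iterable of (score, outcome 0/1, predicted PD)."""
    buckets = defaultdict(list)
    for score, outcome, pd_hat in records:
        for lo, hi in zip(edges, edges[1:]):
            if lo <= score < hi:
                buckets[(lo, hi)].append((outcome, pd_hat))
                break
    rows = []
    for (lo, hi), obs in sorted(buckets.items()):
        actual_rate = sum(o for o, _ in obs) / len(obs)      # observed bad rate
        expected_rate = sum(p for _, p in obs) / len(obs)    # mean model PD
        rows.append(((lo + hi) / 2, logodds(actual_rate), logodds(expected_rate)))
    return rows
```

Plotting the second and third columns against the first gives the two lines to compare.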
An approach to adjusting the expected probabilities so that they better
reflect the actual ones is to regress the actual (observed) 0/1 vector on
the score the model produces. You can use logistic regression again.
Using the output of this regression, you can work out how to change the
slope of the expected log(odds) curve so that it better mimics the actual
log(odds) curve. If you find this is still not enough to attain a good match
between the two lines, you can adjust the intercept as well.
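A minimal sketch of that slope/intercept fit (in Python for illustration; in SAS you would fit it with PROC LOGISTIC using the score as the single covariate). The function names, gradient-descent fitting, and the use of fractional "observed" rates in the usage note are all assumptions for the demo:

```python
import math

def sigmoid(z):
    """Inverse of the log-odds transform."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_calibration(logodds_scores, outcomes, lr=0.1, n_iter=5000):
    """Fit calibrated log-odds = a * raw_log_odds + b by gradient descent
    on the logistic log-loss (a Platt-scaling-style recalibration sketch).
    Returns (a, b): the slope and intercept adjustments."""
    a, b = 1.0, 0.0  # start from the identity mapping (no adjustment)
    n = len(outcomes)
    for _ in range(n_iter):
        grad_a = grad_b = 0.0
        for x, y in zip(logodds_scores, outcomes):
            err = sigmoid(a * x + b) - y  # gradient of the log-loss per case
            grad_a += err * x
            grad_b += err
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b
```

For example, if the true bad rate at raw log-odds x were sigmoid(0.5*x - 1), the fit recovers a slope near 0.5 and an intercept near -1, which you would then apply to every expected log(odds) before converting back to a probability.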
Note that this "calibration" does not affect the scorecard characteristics
-- it just "fudges" the output score they produce. Since the adjustment is a
monotone transformation of the score, it leaves the rank ordering of cases
intact, so it will not improve the predictive performance (discriminatory
power, Gini) of the model -- it will rather make the predictions somewhat
more realistic. If you think your model's predictive performance has
degraded "a lot", you should consider redevelopment.