LISTSERV at the University of Georgia
Date:   Tue, 20 Sep 2011 07:02:16 -0400
Reply-To:   William Shakespeare <shakespeare_1040@HOTMAIL.COM>
Sender:   "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From:   William Shakespeare <shakespeare_1040@HOTMAIL.COM>
Subject:   Study design
Content-Type:   text/plain; charset=ISO-8859-1

Suppose Treatment A, and never Treatment B, is administered to one cohort; a change in practice then occurs, and Treatment B, and never Treatment A, is administered to a later cohort. An investigator wishes to compare the incidence of a disease across these cohorts. What does this prove beyond the fact that the incidences are similar or different? There are so many threats to internal validity that no one can make a causal attribution for the difference, yet causal attribution is surely the unspoken goal of such a study. Do any of you face this situation? What do you say? What is the standard design for it?

A case-control study after the change of practice is not really possible, since there is no relevant control group: everyone gets the same treatment. Even if appropriate controls could be found, a case-control study would be a poor choice, because those designs are geared toward elucidating the factors associated with disease rather than the effectiveness of treatment. Has anyone ever seen case-control studies used to evaluate treatments retrospectively?
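For the descriptive comparison itself (setting aside the causal-attribution problem raised above), the incidence difference between the two cohorts is typically tested with a two-proportion z-test. A minimal Python sketch, using entirely made-up event counts for illustration:

```python
from math import sqrt, erf

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided two-proportion z-test for a difference in incidence."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)  # pooled incidence under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 30/400 cases under Treatment A vs 18/350 under Treatment B
z, p = two_proportion_z(30, 400, 18, 350)
print(f"z = {z:.3f}, p = {p:.3f}")
```

Of course, a small p-value here only says the incidences differ; with two cohorts drawn from different eras, it cannot attribute the difference to the treatment, which is exactly the problem posed above.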
