Date: Thu, 15 Aug 2002 09:14:58 -0700
Reply-To: Andrew Sun <andrew70912@YAHOO.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Andrew Sun <andrew70912@YAHOO.COM>
Subject: repeated measures ANOVA
Content-Type: text/plain; charset=ISO-8859-1
My study design is a two-factor experiment with repeated measures on
one factor. There are three treatment groups, each with 9 subjects,
and each subject has a baseline score and a post-treatment score.
I analyzed the data with a repeated measures ANOVA using PROC ANOVA
in SAS. Below is the SAS code I used:
PROC ANOVA DATA=TWOWAY;
   TITLE 'Two-way ANOVA with TIME as a Repeated Measure';
   CLASS SUBJ GROUP TIME;
   MODEL SCORE = GROUP SUBJ(GROUP) TIME GROUP*TIME TIME*SUBJ(GROUP);
   TEST H=GROUP E=SUBJ(GROUP);
   TEST H=TIME GROUP*TIME E=TIME*SUBJ(GROUP);
RUN;
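(For reference, I believe the same tests can also be obtained with PROC GLM's REPEATED statement on the data in wide format, one row per subject. This is just a sketch; the dataset name TWOWAY_WIDE and the variable names BASELINE and POST are placeholders for however the wide dataset is actually laid out.)

```sas
* Sketch of an equivalent wide-format analysis, assuming one row per
  subject with the two measurements stored as BASELINE and POST;
PROC GLM DATA=TWOWAY_WIDE;
   CLASS GROUP;
   MODEL BASELINE POST = GROUP;
   REPEATED TIME 2;  /* gives the TIME, GROUP*TIME, and GROUP tests */
RUN;
```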
The results are as follows:
For the GROUP factor, DF=2, F=1.47, p=0.25.
For the TIME factor, DF=1, F=22.45, p<.0001.
For the GROUP*TIME interaction, DF=2, F=0.03, p=0.97.
From these results I concluded that all three treatments work well,
since post-treatment scores are significantly higher than baseline
scores, but the three treatment groups do not differ in treatment
effect.
Here are my questions:
1. May I call this analysis "RM ANOVA"? What is the proper format for
reporting these results in a publication? In particular, how should I
report the degrees of freedom (DF)?
2. After this analysis, how can I do a power analysis to show that the
sample size here is large enough to detect a real difference?
Thanks in advance for your reply.