Date:         Mon, 17 Jun 2002 07:42:53 +0200
Reply-To:     Annette <anne5432@UNI.DE>
Sender:       "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From:         Annette <anne5432@UNI.DE>
Subject:      more stats related: controlling for type i error in table analysis
Content-Type: text/plain; charset="iso-8859-1"
What do you think: Is controlling for type I error in table analysis
necessary? I am just interested in the theoretical view of the problem.
There's no study, experiment, or publication involved.
More generally: Is there any tradition of doing multiple/pairwise tests in a
contingency table, or of controlling the type I error there? Most of the time,
people state whether there is an effect overall or not, then eyeball the
frequencies in the table and comment on where the biggest differences are.
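Just to make that usual practice concrete, here is a minimal SAS sketch of
the overall test (the data set MYDATA and the variables A and X are invented
for illustration):

proc freq data=mydata;
   tables a*x / chisq expected;   /* overall Pearson chi-square, with expected counts */
run;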
To be more specific, I'll sketch two scenarios:
Scenario 1: Suppose you have a 3 (A,B,C) x 3 (X,Y,Z) table. Should there be a
control for type I error (because of the multiple comparisons within each row
or column)?
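One way I could imagine controlling it here would be to test all nine 2x2
sub-tables and compare each p-value against a Bonferroni-adjusted alpha of
0.05/9. A rough SAS sketch of one such sub-table (again with invented names):

proc freq data=mydata;
   where a in ('A','B') and x in ('X','Y');   /* one of the nine 2x2 sub-tables */
   tables a*x / chisq;                        /* compare its p-value to 0.05/9  */
run;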
Scenario 2: Suppose you have three 3-level variables (A,B,C), (X,Y,Z), and
(I,II,III). First one investigates the table (A,B,C) x (X,Y,Z), then
(A,B,C) x (I,II,III), and finally (X,Y,Z) x (I,II,III). Should there be a
control for type I error (because of the multiple comparisons across the
three tables)?
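Here the analogous idea would be a Bonferroni adjustment over the three
overall tests, i.e. comparing each p-value to 0.05/3. Again only a sketch,
with an invented variable G for the (I,II,III) factor:

proc freq data=mydata;
   tables a*x a*g x*g / chisq;   /* three overall tests; compare each p to 0.05/3 */
run;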
I apologise for possibly cross-posting with STAT-L.
Thank you very much in advance,