Date: Fri, 12 Nov 2004 08:36:52 -0500
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Art Kendall <Arthur.Kendall@verizon.net>
Organization: Social Research Consultants
Subject: Re: Negatively worded questions and Reliability
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
In an effort to reduce response bias, it is customary to word the items
so that for half of them a high value on the response scale indicates
one end of the underlying construct, and for the other half a high value
indicates the other end of the construct.
When you reflect the scores on the item you cite, it sounds like the
underlying construct would be something like teacher confidence.
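On a 1-to-4 response scale, reflecting (reverse coding) an item just means subtracting each score from min + max = 5, so 1 and 4 trade places, as do 2 and 3. A minimal sketch in Python, with hypothetical scores:

```python
def reflect(scores, lo=1, hi=4):
    """Reverse code (reflect) scores on a lo..hi response scale:
    reflected = (lo + hi) - score, so on 1-4 the mapping is
    1 <-> 4 and 2 <-> 3."""
    return [lo + hi - s for s in scores]

# Hypothetical responses to a negatively worded item
raw = [1, 4, 2, 3]
print(reflect(raw))  # -> [4, 1, 3, 2]
```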
When changing the response key results in higher internal consistency,
the new key is more likely to be the correct one.
Look at the inter-item correlations in the first analysis; there are
likely to be sizable negative correlations. Also take a look at the
corrected item-total correlations and the squared multiple correlations
associated with each item. Large inconsistencies when comparing these
two columns are an indicator that items are incorrectly keyed.
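Both symptoms are easy to see on made-up data. The sketch below (all item names and numbers are hypothetical) computes coefficient alpha from the item and total-score variances, plus the corrected item-total correlation for a mis-keyed item; the negatively worded item drags alpha down and shows a negative item-total correlation until it is reverse coded:

```python
from math import sqrt
from statistics import variance

def cronbach_alpha(items):
    """items: list of k lists, one list per item, one score per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def corrected_item_total(items, idx):
    """Correlation of item idx with the total of the *other* items."""
    rest = [sum(s) for s in zip(*(it for j, it in enumerate(items) if j != idx))]
    return pearson(items[idx], rest)

# Three hypothetical items; q3 is negatively worded (high score = low construct).
q1 = [1, 2, 3, 4, 3, 2]
q2 = [2, 2, 3, 4, 4, 2]
q3 = [4, 3, 2, 1, 2, 3]

raw = [q1, q2, q3]
fixed = [q1, q2, [5 - s for s in q3]]   # reverse code q3

print(cronbach_alpha(raw))              # badly depressed (negative here)
print(corrected_item_total(raw, 2))     # sizable negative correlation
print(cronbach_alpha(fixed))            # much higher once q3 is re-keyed
```

In SPSS terms, the corrected item-total column is what RELIABILITY prints with /SUMMARY=TOTAL; the point of the sketch is only that a mis-keyed item produces exactly the negative entries described above.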
Hope this helps.
Social Research Consultants
University Park, MD USA
Ola S. Rostant wrote:
>I have a Likert scale with some negatively worded questions such as "I
>am afraid I may be a failure as a teacher" (1 = strongly disagree, 2 =
>disagree, 3 = agree, 4 = strongly agree).
>A colleague of mine suggested I reverse code questions like these. Does
>anyone have an opinion on whether this is a plausible thing to do?
>I ran a reliability analysis on the entire survey and the alpha was .53;
>when I reverse-coded the negatively worded questions, the reliability
>shot up to .89.
>All opinions and advice are welcome.