[Rasch] Number of categories

Stone, Gregory gstone at UTNet.UToledo.Edu
Sun Jan 3 05:13:35 EST 2010


Purya

That may be the case, and thus I suggest you try to determine whether or not respondents can make clear distinctions between categories added at the top.

For example, many items make use of the traditional "Strongly Disagree" to "Strongly Agree" scale.  Depending upon the way in which the statement is written, most if not all respondents may select "Strongly Agree."  This could suggest that if we could better elaborate levels of agreement, we might be able to more finely tune the responses.  This would be true, however, only if the following conditions were met:

(1) The respondents did not, in fact, mean they agreed with the top of the scale.  If they did, adding four more points would simply lead them to choose the top of those new four categories.

(2) The categories could be well defined, so that the differences among them could be understood and used consistently by the respondents.  For instance, if we took the span from "Agree" to "Strongly Agree" and decided to add four categories, we would need to ensure that those four categories could be elaborated with clear and unambiguous terms that all respondents could understand.  This may be harder to accomplish than it seems.  Simply adding numbers without descriptions is not likely to work; descriptions will almost certainly be necessary.  If neither approach works, you will wind up collapsing the categories back into the original four (or some incarnation of the same) anyway - and since collapsing always carries with it assumptions that may not be true, in that case it would have been better to leave the categories as they were.
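
By way of illustration only, here is a minimal Python sketch (toy data and a hypothetical 9-point coding, not your instrument) of what such a collapse amounts to.  The recoding map is where the assumption lives: it treats the merged categories as interchangeable.

from collections import Counter

# Toy responses on a hypothetical expanded 1..9 agreement scale
responses = [4, 5, 6, 6, 7, 8, 9, 9, 5, 4, 7, 8, 9, 6, 5, 9, 8, 7, 6, 5]

# Illustrative collapse: fold the added gradations back into the old top categories.
# The map assumes respondents used the merged categories interchangeably -
# exactly the assumption that may not hold.
collapse_map = {1: 1, 2: 2, 3: 3, 4: 4, 5: 4, 6: 4, 7: 5, 8: 5, 9: 5}
collapsed = [collapse_map[r] for r in responses]

print("expanded :", dict(sorted(Counter(responses).items())))
print("collapsed:", dict(sorted(Counter(collapsed).items())))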

Why not add the four or five points to the scale and try piloting the revised instrument with 25 or so people?  That should be enough of a sample to determine whether or not the scale is working at all.
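
If it helps, a rough sketch of the kind of tally one might run on such a pilot (again toy data and a hypothetical 1..9 coding, not a prescription): count how often each category is chosen per item, and look for added top categories that are rarely or never used.

from collections import Counter

# Toy pilot data: item name -> 25 category codes on a hypothetical expanded 1..9 scale
pilot = {
    "item_1": [4, 5, 6, 9, 9, 8, 7, 5, 4, 6, 9, 8, 7, 5, 4, 6, 9, 8, 7, 5, 4, 6, 9, 8, 7],
    "item_2": [9, 9, 9, 9, 8, 9, 9, 9, 9, 9, 8, 9, 9, 9, 9, 9, 9, 8, 9, 9, 9, 9, 9, 9, 8],
}

for item, codes in pilot.items():
    counts = Counter(codes)
    unused = [c for c in range(6, 10) if counts[c] == 0]
    print(item, dict(sorted(counts.items())), "| unused added categories:", unused)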

I have found that most problems such as the one you described are better solved by considering the construct/variable and the item arrangement.  What does the gap represent?  Is it simply an artifact of an eccentric item or two at the extreme of the distribution?  Are the items meaningfully at the extreme as well as statistically?  If so, what do they mean?  What are the commonalities?  What sorts of concepts could be covered by items written to assess the gap, thereby rendering the very extreme items not so extreme?  Personally, I find it of great importance to evaluate the meaning, the content, the variable, and the intent at the same time, as each can help inform the others.

All the best,
Gregory 

Gregory E. Stone, Ph.D., M.A.

Associate Professor of Research and Measurement
Judith Herb College of Education   University of Toledo, MS #921
Toledo, OH 43606   419-530-7224

Board of Directors, American Board for Certification of Teacher Excellence     www.abcte.org
For information about the Research and Measurement Programs at The University of Toledo and careers in psychometrics, statistics and evaluation, email gregory.stone at utoledo.edu.




-----Original Message-----
From: rasch-bounces at acer.edu.au on behalf of Purya Baghaei
Sent: Sat 1/2/2010 3:31 AM
To: Kenji Yamazaki
Cc: rasch
Subject: Re: [Rasch] Number of categories
 
Gregory,

Since there are 5 categories on the scale, many respondents who are higher than category 5 on some of the items must all be rated 5. That is, the number of categories doesn't allow a finer distinction among these respondents. Increasing the number of categories should result in the endorsement of different levels of the scale by higher-ability respondents. I assume this would produce more variation in the total raw scores for items, give a wider spread of item estimates, and cover the empty regions of the scale. The distance between the last two thresholds is more than 4 logits, so the respondents seem to be able to distinguish more categories.

Regards

Purya




