[Rasch] Rating scale and partial credit model with identical responses
tpelton at uvic.ca
Tue Jun 27 02:54:45 EST 2006
Say Mike, you sure are a generous person - thanks for your willingness to help.
One more point on the unlikely set of results provided.
Such results could happen if the range of achievement represented by each
rating-scale category is very broad (or the criteria are very specific),
meaning that the raters would have to be very incompetent to report different
ratings (or that the proportion of debatable person performances is very
small). The problem with this type of instrument (or any instrument that
produces perfect Guttman data, or nearly so) is that there is no overlap (or
insufficient overlap) between the rating distributions to support an accurate
estimation of the relative scale distances between the ratings or the items.
If you have such rating results, the information is still useful (i.e., you
have a hint that something may be amiss with your rating system); it just
doesn't contribute to the estimation of a measurement scale.
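To make the "no overlap" point concrete, here is a toy sketch of my own (not
from the thread), using the pairwise discordant-count estimator for
dichotomous items: the distance between items i and j is log(n_ij / n_ji),
where n_ij counts persons who succeeded on i but failed on j. In perfect
Guttman data one of those counts is always zero, so no pairwise distance can
be estimated; a single "debatable" performance restores estimability.

```python
import math

# Perfect Guttman matrix: rows = persons (ablest first), columns = items
# (easiest first); each person succeeds on exactly the easiest k items.
guttman = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

def pairwise_distance(data, i, j):
    """Pairwise estimate of the scale distance between items i and j:
    the log of the ratio of discordant response counts."""
    n_ij = sum(1 for row in data if row[i] == 1 and row[j] == 0)
    n_ji = sum(1 for row in data if row[i] == 0 and row[j] == 1)
    if n_ij == 0 or n_ji == 0:
        return math.inf  # no overlap -> the distance is inestimable
    return math.log(n_ij / n_ji)

# In perfect Guttman data every adjacent-item distance is inestimable:
print([pairwise_distance(guttman, i, i + 1) for i in range(3)])

# One reversal (a debatable performance) makes the items 0-1 distance finite:
noisy = guttman + [[0, 1, 0, 0]]
print(pairwise_distance(noisy, 0, 1))
```

(The matrix and function names are hypothetical; real programs such as
Winsteps use full estimation methods, but they face the same zero counts.)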
>===== Original Message From Mike Linacre <rmt at rasch.org> =====
>Thank you for your question - it addresses an important issue.
>These "perfect" response strings are a problem for any type of analysis!
>These response strings are like putting each student's MCQ bubble sheet
>through an optical scanner 5 times, and then thinking that you had 5
>observations of each student's responses to each item.
>If you really have a dataset like this, then 4 of the 5 judges are
>redundant, and their data should be eliminated. Their data provide no extra
>information.
>You would then have only one statistically-independent observation of each
>student. This is enough to order the students, but not enough to construct
>a verifiably linear measurement system.
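A small illustration of the redundancy point (my own sketch, with made-up
ratings, not data from the thread): five judges who always agree contribute
nothing beyond one judge, because every raw score is simply multiplied by
five and the ordering of students is unchanged.

```python
# Hypothetical ratings from a single judge: 3 students x 5 items, 0-3 scale.
one_judge = [
    [3, 2, 2, 1, 1],  # student A
    [2, 2, 1, 1, 0],  # student B
    [1, 1, 1, 0, 0],  # student C
]

# Raw scores from the single judge:
scores_one = [sum(ratings) for ratings in one_judge]

# Five identical judges replicate every rating 5 times, so each raw
# score is exactly 5x the single-judge score ...
scores_five = [5 * s for s in scores_one]

def rank_order(scores):
    """Student indices sorted from lowest to highest raw score."""
    return sorted(range(len(scores)), key=scores.__getitem__)

# ... and the student ordering is unchanged: the replicated "data"
# inflate apparent precision without adding information, like scanning
# the same bubble sheet five times.
assert rank_order(scores_one) == rank_order(scores_five)
print(scores_one, scores_five)
```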
>At 6/26/2006, you wrote:
>>If on a set of polytomous items or Likert scale items students get the
>>same score on each individual item, is it a problem for Rasch analysis?
>>Suppose the response vectors for 5 polytomous items for 5 respondents
>>And this pattern continues for all respondents.
>>This could well be the case when there is complete agreement among
>>judges in rated performances, which could be the result of good training
>>and the existence of a well defined rating scale.
>>Rasch mailing list
>>Rasch at acer.edu.au
>Mike Linacre, mike at winsteps.com
>Winsteps, PO Box 811322, Chicago IL 60681-1322
>Tel. & FAX (312) 264-2352
>Facets Introductory Workshop: www.winsteps.com/facwork.htm - July 28-29, 2006
>Winsteps Introductory Workshop: www.winsteps.com/workshop.htm - Aug. 7-8,
>Winsteps Online Course: www.statistics.com/courses/rasch - July 21 - Aug.
>Winsteps: www.winsteps.com/winsteps.htm - current version 3.61.1 - May 2006
>Facets: www.winsteps.com/facets.htm - current version 3.61.0 - May 2006
>Your comments and questions are welcomed.
Department of Curriculum and Instruction
University of Victoria
PO Box 3010 STN CSC
Victoria BC V8W 3N4
Fax (250) 721-7598