[Rasch] inter-rater agreement

shirin shirazi shirin.shirazi at gmail.com
Sat Jan 12 06:59:07 EST 2013


Dear Professor Hess,

Thanks a lot for your helpful answer.

I read in an article: "unlike the traditional concept of inter-rater
reliability, the reliability of rater separation signifies a consistent
pattern of rater disagreement; in other words, the higher the value, the
lower the inter-rater reliability." Is this statement correct? Running
Facets, we got a rater measurement report that contains this rater
separation index as well as the inter-rater agreement opportunities,
followed by the observed and exact agreement percentages. My question is:
are these two pieces of information reporting the same result or not?
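
For concreteness, here is a rough numerical sketch in Python (with
hypothetical rater measures, standard errors, and ratings; this is not
Facets output) of how the two statistics are built from different
ingredients: the separation reliability from the spread of rater severity
measures relative to their measurement error, and the exact-agreement
percentage from counting identical ratings on shared examinees.

# Rough sketch, not Facets: all numbers below are hypothetical.
import numpy as np

# Hypothetical rater severity measures (logits) and standard errors,
# standing in for the columns of a Facets rater measurement report.
measures = np.array([-0.85, -0.30, 0.10, 0.45, 0.60])
std_errs = np.array([0.12, 0.11, 0.13, 0.12, 0.14])

# Rater separation reliability: "true" variance of the rater measures
# (observed variance minus mean error variance) over observed variance.
obs_var  = measures.var(ddof=1)
err_var  = (std_errs ** 2).mean()
true_var = max(obs_var - err_var, 0.0)
separation  = np.sqrt(true_var / err_var)   # separation index G
reliability = true_var / obs_var            # equals G^2 / (1 + G^2)
print(f"separation G = {separation:.2f}, reliability = {reliability:.2f}")

# Observed exact agreement: share of rating pairs (same examinee, two
# raters) with identical categories. Rows = examinees, columns = raters.
ratings = np.array([[4, 4, 3, 3, 3],
                    [5, 4, 4, 4, 3],
                    [3, 3, 3, 2, 2],
                    [5, 5, 4, 4, 4]])
pairs = agree = 0
for row in ratings:
    for i in range(len(row)):
        for j in range(i + 1, len(row)):
            pairs += 1
            agree += int(row[i] == row[j])
print(f"observed exact agreement = {100 * agree / pairs:.1f}%")

A high separation reliability says the raters reliably differ in severity,
which is not what classical inter-rater reliability rewards. Facets also
reports the agreement expected under the model; that figure depends on the
estimated examinee, item, and rater measures, so it is not reproduced in
this sketch.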


Kind Regards,

Shirin



On Fri, Jan 11, 2013 at 10:39 PM, Robert Hess <Robert.Hess at asu.edu> wrote:

> Shirin,
>
> The issue is not as simple as the general notion of inter-rater
> reliability in the classical sense. There are actually two different levels
> of questions that need to be examined. The first we might interpret as
> between-rater (or between-scorer) consistency. This refers to the extent to
> which all raters in general tend to score high-ability students as high,
> medium-ability students as medium, and low-ability students as low (somewhat
> similar to the notion of inter-rater reliability, but not really the same).
> The second refers to within-rater issues. In this area the concerns are
> threefold: 1) rater consistency: is a rater consistent in their scoring
> (always scoring high-ability students as high, low-ability students as low,
> etc.); 2) rater severity: is a rater more severe or more lenient with
> students than the other raters; and 3) if scores are based on more than one
> trait, is there a halo effect; in other words, does the first trait scored
> dictate the level of the scores received by the following traits?
>
> There is a whole plethora of articles available that describe these
> influences in much greater detail.
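
To make the three within-rater checks described above concrete, here is a
minimal raw-score sketch in Python. The ratings are simulated and the
summaries are crude raw-score analogues, not the model-based severity
measures and fit statistics that Facets reports.

# Rough raw-score illustration of consistency, severity, and halo checks.
# Everything here is simulated; Facets estimates these from the model.
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_raters, n_traits = 30, 3, 2
ability  = rng.normal(size=n_examinees)          # hypothetical abilities
severity = np.array([-0.5, 0.0, 0.4])            # hypothetical rater severities

# Simulated 1-6 ratings: examinees x raters x traits.
ratings = np.clip(np.round(
    3.5 + ability[:, None, None] - severity[None, :, None]
    + rng.normal(scale=0.5, size=(n_examinees, n_raters, n_traits))), 1, 6)

for r in range(n_raters):
    own    = ratings[:, r, :].mean(axis=1)                    # rater r, per examinee
    others = np.delete(ratings, r, axis=1).mean(axis=(1, 2))  # everyone else
    consistency = np.corrcoef(own, others)[0, 1]   # ranks examinees like the rest?
    sev_diff    = own.mean() - others.mean()       # negative = harsher than peers
    halo        = np.corrcoef(ratings[:, r, 0], ratings[:, r, 1])[0, 1]
    print(f"rater {r}: consistency r = {consistency:.2f}, "
          f"severity diff = {sev_diff:+.2f}, trait-to-trait r = {halo:.2f}")

A high trait-to-trait correlation is not by itself proof of a halo effect,
since both traits share the examinee's ability; it only flags raters worth
inspecting further.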
>
> Robert Hess
>
> Emeritus Professor of Measurement and Evaluation
>
> Arizona State University
>
> From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf Of shirin shirazi
> Sent: Friday, January 11, 2013 7:11 AM
> To: Rasch
> Subject: [Rasch] inter-rater agreement
>
>
> Dear List Members,
>
> I have a question about inter-rater agreement obtained through Facets
> analysis.
>
> It is said in the literature that if the observed inter-rater agreement is
> greater than the expected agreement, then the raters behave like rating
> machines (higher inter-rater reliability). If we take that right, then what
> if the separation and reliability for the same sample are high: can we
> interpret that as low inter-rater agreement? Should they be in line with
> each other, or do they report two distinct issues?
>
>
> Your answers are highly appreciated.
>
> Shirin
>