[Rasch] Relative Examiner severity after group anchoring in FACETS

Doug Lawson lawsondrdm at gmail.com
Thu May 12 23:01:09 AEST 2016

Good day,

After having run hundreds of OSCEs through FACETS, I now use quality-assurance
examiners to link the stations and the multiple tracks.  These examiners
rotate in the opposite direction from the candidates, so that they don't
simply follow them from station to station, and then across tracks.
Otherwise the subset problem is an issue and you can't compare measures
across the subsets.
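That linking requirement can be checked directly: treat each examiner-candidate
scoring pair as an edge in a graph, and measures are comparable only when the
graph has a single connected component. Here's a minimal union-find sketch of
that check, using hypothetical pairings (not FACETS output):

```python
# Minimal sketch: detect disconnected subsets in a judging plan.
# The (examiner, candidate) pairs below are hypothetical, not FACETS data.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    parent[find(parent, a)] = find(parent, b)

# Two tracks that share no examiner and no candidate -> two subsets.
observations = [
    ("ex1", "cand1"), ("ex1", "cand2"), ("ex2", "cand2"),  # track A
    ("ex3", "cand3"), ("ex3", "cand4"), ("ex4", "cand4"),  # track B
]
parent = {n: n for pair in observations for n in pair}
for ex, cand in observations:
    union(parent, ex, cand)
n_disconnected = len({find(parent, n) for n in parent})
print(n_disconnected)  # 2 disjoint subsets: measures not comparable

# One roving QA examiner who scores in both tracks links them.
for ex, cand in [("qa", "cand1"), ("qa", "cand3")]:
    parent.setdefault(ex, ex)
    union(parent, ex, cand)
n_linked = len({find(parent, n) for n in parent})
print(n_linked)  # 1 connected subset: measures comparable
```

The same idea underlies FACETS' own subset report; the sketch just makes it
easy to see why one roving examiner is enough to join two tracks.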

Anchoring the examiners does get rid of the subset problem, but it doesn't
mean you can compare subsets unless you can demonstrate that the anchoring
is, in fact, accurate.  The subsets probably don't make a huge amount of
difference, but that is why we use Rasch - to have more accurate measures.
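Concretely, within a disconnected subset the zero point is arbitrary: add any
constant to every examiner severity and every candidate measure in that
subset, and the predicted probabilities are unchanged, so the data cannot pin
down the offset between subsets. A small sketch with made-up numbers:

```python
import math

def p_success(theta, severity):
    # Rasch dichotomous model: P(success) for candidate ability theta
    # against examiner severity (station difficulty folded in for brevity).
    return 1.0 / (1.0 + math.exp(-(theta - severity)))

# Hypothetical values inside one disconnected subset.
theta, severity = 0.7, 0.4
shift = 1.3  # any constant offset applied to the whole subset

p_original = p_success(theta, severity)
p_shifted = p_success(theta + shift, severity + shift)
print(p_original, p_shifted)  # identical to floating-point precision
```

This is why a group-anchored mean of zero in each subset only makes severities
comparable within that subset, not across subsets.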

If there are a lot of examiners in each site, the anchoring might make
sense.  I once had a problem where one of the sites (a different culture)
held the attitude that they were better, and thus all the examiners scored
the students higher (no proof, it just felt that way). Anchoring those
examiners would have pulled the scores down for the students; but were the
students actually better?  Multiple sites are hard to link, unless you can
video some candidates and link the sites that way.

Hope this helps. It might be best to report the scores from both the anchored
and the independent analyses and compare them. That could identify the
students sitting on the decision point and ensure that the scoring is
reviewed for those students before pass/fail decisions are made.
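As a sketch of that comparison (candidate measures and the cut score are
entirely made up), flag anyone whose pass/fail decision flips between the
two runs:

```python
# Hypothetical measures (logits) for the same candidates from the two runs.
cut = 0.0  # pass/fail cut score, also hypothetical
anchored = {"s1": 0.35, "s2": -0.05, "s3": 0.10, "s4": -0.40}
independent = {"s1": 0.30, "s2": 0.08, "s3": -0.02, "s4": -0.35}

flagged = [
    cand for cand in anchored
    if (anchored[cand] >= cut) != (independent[cand] >= cut)
]
print(flagged)  # candidates whose pass/fail decision flips between runs
```

Those are the scripts worth pulling for human review before results are
released.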


On Thu, May 12, 2016 at 1:55 AM, Imogene Rothnie <
imogene.rothnie at sydney.edu.au> wrote:

> Hello all, here is one for the FACETS users.
> Just wanting to confirm the following, or tell me if I have misunderstood...
> I have an OSCE with examiners nested in sites and stations , and students
> nested in sites, although students and stations are crossed.
> I always get a subset disconnection that mostly equates to sites, apart
> from a few odd examiners who make their own subset (!)
> When I group anchor on Examiners, I always get 'subset connection OK'.
> HOWEVER , this just means each group now has an anchored mean of zero yes?
> And an examiner severity measure of 1.0 in group A, would only be 0.5
> logits more severe than examiners with 0.5 logit severity IN THAT GROUP.
> Examiner severity across subset groups, even after subset anchoring, does
> not mean a severity logit of 1.0 in Group A = an examiner severity logit of
> 1.0 in Group B. Yes?
> But what of student measures? They are crossed with stations, so does that
> provide enough rectangulation to consider their measures across the
> entire cohort even though they are also nested in site?
> Many thanks for your opinions..
> IMOGENE ROTHNIE B.A./B.Sci. Grad. Dip. (Psych.)  M.ED. (Assess & Program
> Evaluation)
> Senior Lecturer, Assessment
> Education Office (Tuesday - Friday)
> Sydney Medical School
> Rm 108, Edward Ford Building (A27) | The University of Sydney | NSW | 2006
> T +61 2 9036 6434  | M +61 418 381 359
> (mobile best contact)
> ________________________________________
> Rasch mailing list
> email: Rasch at acer.edu.au
> web:
> https://mailinglist.acer.edu.au/mailman/options/rasch/lawsondrdm%40gmail.com