[Rasch] A practical one

Mark Moulton MarkM at eddata.com
Tue Oct 11 11:48:05 EST 2005


Dear John,

To separate the effect of student growth from the effect of repeated
exposure (which is real and measurable), one needs a test that has at least
some items that differ across admins.  One can then treat it as a
differential item functioning (DIF) problem, the two DIF groups being
Repeaters and Non-Repeaters.

This leads to several options.  One is to calibrate the items using only
Non-Repeaters.  Use these calibrations to anchor only the items that differ
across admins.  Letting the common items float, run again using only
Repeaters.  For the common items, graph their difficulties from the first
run against those from the second run.  The mean change in difficulty is the
"repeater effect."  Subtract this from each repeater's observed growth,
computed using the common items anchored across runs, to get the student's
"true" growth.

The WinSteps DIF tables yield shortcuts to the same process.

A 3-facet approach is also possible, and is probably the most elegant:
Facet 1 = student, Facet 2 = item, Facet 3 = exposure (e.g., not exposed,
exposed once, exposed twice), Datum = performance on the item.  Analyze
using data from all admins.  This approach works to the degree that all
facet combinations are represented in the data, i.e., persons and items
appear at multiple exposure levels.  The result is a facet parameter that
captures the "exposure effect" and student measures from which the exposure
effect has been purged.
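
In model terms, the log-odds of a correct response becomes B(student) -
D(item) - E(exposure), with E estimated as a facet parameter alongside the
student measures.  A minimal Python sketch with hypothetical parameter
values (the sign convention, where a negative E makes success more likely,
is just one possible choice):

    import math

    def p_correct(b, d, e):
        """Three-facet Rasch model:
        log-odds(correct) = ability - difficulty - exposure effect."""
        return 1.0 / (1.0 + math.exp(-(b - d - e)))

    ability, difficulty = 0.80, 0.10  # logits, hypothetical
    # Exposure facet: levels 0, 1, 2 = not exposed, exposed once, twice.
    exposure_effect = {0: 0.00, 1: -0.30, 2: -0.45}

    for level, e in exposure_effect.items():
        print(f"exposures={level}: "
              f"P(correct) = {p_correct(ability, difficulty, e):.3f}")

Because E is estimated jointly with the student parameters, the ability
measures no longer absorb the exposure advantage.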

I've used variants of both approaches in dealing with the same problem, with
reasonable success.

Mark Moulton
markm at eddata.com
Educational Data Systems



-----Original Message-----
From: John Barnard (EPEC) [mailto:JohnBarnard at bigpond.com] 
Sent: Sunday, October 09, 2005 8:33 PM
To: Rasch list
Subject: [Rasch] A practical one

Dear all

I would appreciate some opinions on the following scenario.

The same paper was given on two occasions. Some 20% of students (let's
call them the repeaters) who sat the first round (and failed) also sat
the second round. In round 1, the repeaters' mean ability was (say) 0.5
logits less than the non-repeaters'. Anchoring the item difficulties
from the first round and using them in round 2 resulted in the
repeaters doing significantly better (say 0.7 logits on average). Also,
the repeaters now have approximately the same mean ability as the
non-repeaters in round 2, say 0.8 logits.

The question is this: if the repeaters' mean ability increased by 0.7
logits, how can one account for this (taking the same paper again) so
as not to unfairly advantage the repeat group in round 2? (I am aware
of learning and other factors, but let's ignore those for the moment.)

Kindly
John 

John J Barnard
Executive Director: EPEC Pty Ltd
www.users.bigpond.com/JohnBarnard/



