[Rasch] Help with equating partial credit items in pre - post test

Stephen Humphry stephen.humphry at uwa.edu.au
Mon Jun 2 12:53:16 EST 2008


Mike, if there is a different unit, then in principle the responses in frames of
reference that have different units (due to different levels of
discrimination) need to be modelled separately first and then brought onto a
common scale. Higher discrimination produces responses for which the Guttman
pattern is more likely, other things being equal. I've shown cases in which
differences in the levels of discrimination produce very misleading
inferences in empirical situations if they are not dealt with appropriately.
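
As a rough illustration of that point (a simulation sketch with invented parameter
values, not anything from the analyses referred to above), the Python snippet below
generates 2PL responses and counts how often a person's difficulty-ordered response
string is a perfect Guttman pattern; the rate rises with the discrimination parameter.
The function name and numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def guttman_rate(discrimination, n_persons=2000):
    """Proportion of simulated persons whose responses to difficulty-ordered
    dichotomous items form a perfect Guttman string (all 1s, then all 0s)."""
    difficulties = np.linspace(-2, 2, 9)                 # items ordered easy -> hard
    theta = rng.normal(0, 1, n_persons)                  # person locations
    # 2PL probabilities: P = 1 / (1 + exp(-a * (theta - b)))
    p = 1 / (1 + np.exp(-discrimination * (theta[:, None] - difficulties[None, :])))
    x = (rng.random(p.shape) < p).astype(int)            # simulated responses
    # A Guttman string is non-increasing across items sorted easy -> hard
    return np.all(np.diff(x, axis=1) <= 0, axis=1).mean()

for a in (0.5, 1.0, 2.0, 4.0):
    print(f"discrimination {a}: Guttman-pattern rate {guttman_rate(a):.2f}")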
 
Evidence of different levels of discrimination, from tests of fit, does not
(necessarily) imply dependence, though. It depends on what you count as
"overfit".
 
Steve

  _____  

From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf
Of Mike Linacre (RMT)
Sent: Sunday, 1 June 2008 8:44 PM
To: rasch at acer.edu.au
Subject: [Rasch] Help with equating partial credit items in pre - post test


Good questions, Gregory.

You wrote: "Overfitting items ... does the fit help us with detecting them?
What happens if you find them?"

Fit is relative so, in any usual set of items, about half the items will
overfit and half will underfit. Underfit and overfit are usually easy to
detect in Rasch analysis.
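
For instance, here is a minimal sketch of the usual item mean-square fit statistics
(assuming person measures theta and item difficulties b have already been estimated;
the function name is my own, illustrative choice). Values near 1 indicate fit to the
model, values noticeably above 1 suggest underfit (noise), and values noticeably
below 1 suggest overfit (Guttman-like determinism).

import numpy as np

def item_mean_squares(x, theta, b):
    """x: persons-by-items 0/1 matrix; theta: person measures; b: item difficulties."""
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))   # Rasch expected scores
    w = p * (1 - p)                                         # model variances
    r = x - p                                               # score residuals
    outfit = np.mean(r**2 / w, axis=0)                      # unweighted, outlier-sensitive
    infit = np.sum(r**2, axis=0) / np.sum(w, axis=0)        # information-weighted
    return infit, outfit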

Conspicuous underfit damages the usefulness of the measures through
unmodeled noise (unpredictability). This lack of predictability also lessens
the usefulness of an equating relationship in an empirical situation. We
are not really sure that measure X on one instrument corresponds to measure
Y on the other instrument.

Conspicuous overfit (Guttman patterns) stretches out the measures along the
logit variable, thereby overstating reliability. But in equating situations of
the Fahrenheit-Celsius variety, we know that one set of logit measures
stretches out the variable relative to the other set of measures, so overfit
really doesn't matter. It merely changes the equating slope. In fact, we
will be more sure that measure X on one instrument corresponds to
measure Y on the other instrument in empirical situations.
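
A small sketch of that Fahrenheit-Celsius style of equating, here taken to be a
mean-sigma linear transformation on common-item difficulties (the function name and
the numbers below are invented for illustration): if one calibration's logits are
stretched by overfit, the stretch is absorbed entirely into the slope A.

import numpy as np

def mean_sigma_equating(b_x, b_y):
    """Slope A and intercept B putting Form X measures onto Form Y's scale,
    from common-item difficulties: measure_on_Y = A * measure_on_X + B."""
    A = np.std(b_y, ddof=1) / np.std(b_x, ddof=1)
    B = np.mean(b_y) - A * np.mean(b_x)
    return A, B

# Form X logits stretched by a factor of 1.4 relative to Form Y (invented values)
b_x = np.array([-1.4, -0.7, 0.0, 0.7, 1.4])
b_y = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(mean_sigma_equating(b_x, b_y))    # A is about 0.71, B is 0: only the slope changes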

It would be interesting to find a paper that demonstrates that overfit to a
Rasch model really does lead to misleading inferences in empirical
situations (apart from reliability coefficients).

Mike L.

At 6/1/2008, Stone, Gregory wrote:


I've done this several times when the N is too small. My question regards
local independence. When I've done this in the past, I've (we've) assessed
the items for overfit and exceptionally high point-biserials. It isn't
perfect, but it can give us an idea of whether the requirement of local
independence is met, even for the calculation of item difficulties.
Some journals are requiring this to be done. So ...

Is this (local independence) really important in the estimation process for
the purpose of equating?

Overfitting items ... does the fit help us with detecting them?  What
happens if you find them?
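
One possible sketch of the point-biserial screening described above (assumed code,
not the actual procedure used): corrected point-biserials, correlating each item
with the rest score so the item is not correlated with itself. Items sitting far
above the others would be the ones examined for local dependence.

import numpy as np

def corrected_point_biserials(x):
    """x: persons-by-items 0/1 matrix. Correlate each item with the rest score
    (total score excluding that item)."""
    total = x.sum(axis=1)
    return np.array([np.corrcoef(x[:, i], total - x[:, i])[0, 1]
                     for i in range(x.shape[1])])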
