[Rasch] Misfitting Individuals

Twing, Jon jon.s.twing at pearson.com
Mon Apr 30 10:24:48 EST 2007


This is good information.  Thanks for sharing with the group.

-Jon

**************************************************************
Jon S. Twing, Ph.D.
Executive Vice President, Test & Measurement Services
 
Pearson Educational Measurement
2510 N. Dodge Street, P.O. Box 30, Mailstop 165
Iowa City, Iowa  52245-9945
Phone: 319-339-6407
Fax: 319-339-6477
Cell: 319-331-6547
 
Jon.S.Twing at Pearson.com
http://www.pearsonsolutions.com/testmeasure/index.htm
**************************************************************
 
-----Original Message-----
From: curt0016 at flinders.edu.au [mailto:curt0016 at flinders.edu.au] 
Sent: Sunday, April 29, 2007 5:15 PM
To: Twing, Jon
Cc: Petroski, Greg; rasch at acer.edu.au
Subject: RE: [Rasch] Misfitting Individuals

Greg, Jon, and Others,

I agree broadly with Jon's approach, that is: (1) identify anomalies,
(2) delete them from calibration, but (3) use them if you must for
generating a score - but be suspicious of the score.

Where I disagree with Jon is over the mean square fit values that I
would use for identifying anomalies. Peter Boman and I found that you
need to be quite lenient with fit statistics - we eventually decided on
the range 0.5 or 0.6 to 1.6 or 1.8, and we looked at both infit and
outfit values.

The problem is that residual-based fit statistics lack power. In the
case of person fit, you really need a scale with about 50 degrees of
freedom (50 dichotomous items, or 17 items with four response
categories) to get reasonable person fit values. When you have a scale
with fewer degrees of freedom, you need to be even more lenient in
identifying anomalies.
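To make concrete what these residual-based statistics measure, here is a
minimal sketch (not from the original post) of person infit and outfit
mean squares under the dichotomous Rasch model; the function name and the
example abilities and difficulties are illustrative only.

```python
import math

def person_fit(responses, difficulties, theta):
    """Person infit/outfit mean squares under the dichotomous Rasch model.

    responses   : list of 0/1 item scores for one person
    difficulties: list of item difficulties (logits)
    theta       : the person's ability estimate (logits)
    """
    sq_resids, variances, z_squares = [], [], []
    for x, b in zip(responses, difficulties):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))  # model probability of success
        var = p * (1.0 - p)                       # model variance for this item
        sq = (x - p) ** 2                         # squared score residual
        sq_resids.append(sq)
        variances.append(var)
        z_squares.append(sq / var)                # squared standardized residual
    outfit = sum(z_squares) / len(z_squares)      # unweighted mean square
    infit = sum(sq_resids) / sum(variances)       # information-weighted mean square
    return infit, outfit
```

Values near 1.0 indicate randomness consistent with the model; a pattern
that contradicts the item ordering (failing easy items while passing hard
ones) inflates both statistics, which is what a lenient screening range
such as 0.5-1.8 is then applied to.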

See Curtis, D. D. & Boman, P. (2004). The identification of misfitting
response patterns to, and their influences on the calibration of,
attitude survey instruments. 12th International Objective Measurement
Workshop, Cairns, QLD.

Quoting "Twing, Jon" <jon.s.twing at pearson.com>:

> Greg:
> 
>  
> 
> This is often more art than science.  Here is what we sometimes do:
> 
>  
> 
> 1.)     Use Rasch Person Fit to identify "anomalies" in the testing
> experience (this could be pure guessing, cheating or other unusual
> student engagements).
> 
> 2.)     Since most of our work requires a student score, we will score
> them but we might choose to drop them from the calibration.
> 
> 3.)     In the "old days" we might have included an asterisk
> indicating a peculiar response string, but we have not done this in
> the last 10 years or so.
> 
> 4.)     Typically we use Mean Square Fit, INFIT and OUTFIT when
> diagnosing person anomalies.  We typically use the values dictated for
> items and apply them to persons.
> 
> 5.)     Below are the criteria I have collected over the years.
> 
>  
> 
> Item INFIT and OUTFIT should be between 0.60 and 1.40 (Bond & Fox,
> 2001; Linacre & Wright, 1999).
> 
> Item INFIT between 0.70 and 1.30 (Bode, Heineman, & Semik, 2000;
> Bogner, Corrigan, Bode & Heinemann, 2000).
> 
> Mean-square fit statistics are defined such that the model-specified
> uniform value of randomness is 1.0.  Values greater than 1.5 (more
> than 50% unexplained randomness) are problematic (Wright and
> Panchapakesan, 1969; Linacre, 1999).
> 
etc.
