[Rasch] scale shrinkage or expansion

David Andrich D.Andrich at murdoch.edu.au
Tue Dec 13 12:31:34 EST 2005


Matt, that is exactly the subject of two doctoral studies here at
Murdoch. Papers are being prepared and submitted for publication.

David

David Andrich, BSc, MEd (UWA); PhD(Chic), FASSA
Professor, School of Education 
Murdoch University 
Murdoch, Western Australia 6150 
Email: andrich at murdoch.edu.au
Phone +61 8 9360 2245 
Fax +61 8 9360 6280 
 http://www.education.murdoch.edu.au/educ_RaschCourse2005.html
 


-----Original Message-----
From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On
Behalf Of matt.schulz at act.org
Sent: Tuesday, 13 December 2005 12:10 AM
To: rasch at acer.edu.au
Subject: [Rasch] scale shrinkage or expansion



Fellow Raschers:

I received the following inquiry recently:

   Are you aware of any research in which a linear transformation was
   used in the equating of Rasch item parameters?  While I realize that
   this practice is counter to the traditional Rasch approach, I
   wondered how one addressed differences in the variability of item
   difficulties from one administration to another.  Do you know of any
   testing programs that utilize a linear equating in a Rasch setting?
   If so, do you know of any published studies addressing the impact of
   this non-traditional adjustment on the ability estimates?
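
For concreteness, the adjustment the inquiry describes is usually the
mean-sigma method: the new form's common-item difficulties are rescaled
by the ratio of the standard deviations as well as shifted, whereas
traditional Rasch equating fixes the slope at 1 and shifts by the mean
difference only.  A minimal sketch in Python (the difficulty values are
made up for illustration):

    # Mean-sigma linear equating of common-item Rasch difficulties,
    # contrasted with the traditional Rasch mean-shift.
    from statistics import mean, stdev

    # Hypothetical difficulties for the same anchor items on two forms.
    form_x = [-1.20, -0.45, 0.10, 0.65, 1.30]  # reference form (logits)
    form_y = [-1.65, -0.70, 0.05, 0.90, 1.85]  # new form, more spread out

    # Traditional Rasch equating: slope fixed at 1, shift by the means.
    shift = mean(form_x) - mean(form_y)
    mean_equated = [b + shift for b in form_y]

    # Mean-sigma linear equating: slope is the ratio of SDs, and the
    # intercept then matches the means.
    A = stdev(form_x) / stdev(form_y)
    B = mean(form_x) - A * mean(form_y)
    linear_equated = [A * b + B for b in form_y]

    print(f"slope A = {A:.3f}, intercept B = {B:.3f}")
    for b, m, l in zip(form_y, mean_equated, linear_equated):
        print(f"{b:6.2f} -> mean-shift {m:6.2f}, mean-sigma {l:6.2f}")

When A is close to 1 the two adjustments agree; the inquiry is about
what to do, and what happens to the ability estimates, when it is not.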

I've had some experience with this, but not much.  One time, it seemed
that the effect of a low-vision rehabilitation program was to increase
the standard deviation of the item calibrations.  I suppose whenever
this happens, one looks for substantive explanations.  The items were
skills taught in the program, and the question was "how hard is it for
you to..." (easy, moderate, hard).  Patients probably "understood" the
skills differently after experiencing the program.  As a result, they
may have been willing to give more extreme responses--"easy" and
"hard"--to the skills.  This would have spread out the items and/or
pulled the threshold calibrations in towards zero.
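
That mechanism can be checked directly with the rating scale model:
holding the person and item locations fixed, pulling the two thresholds
in toward zero raises the probability of the extreme categories.  A
small Python sketch (the threshold values are made up for
illustration):

    # Rating scale model probabilities for a 3-category item
    # (0 = "easy", 1 = "moderate", 2 = "hard").
    import math

    def rsm_probs(theta, delta, taus):
        """Category probabilities under Andrich's rating scale model."""
        # Cumulative numerators:
        # psi_0 = 0, psi_x = sum over k <= x of (theta - delta - tau_k).
        psis = [0.0]
        for tau in taus:
            psis.append(psis[-1] + (theta - delta - tau))
        total = sum(math.exp(p) for p in psis)
        return [math.exp(p) / total for p in psis]

    theta = delta = 0.0
    wide   = rsm_probs(theta, delta, [-1.5, 1.5])  # thresholds far apart
    narrow = rsm_probs(theta, delta, [-0.3, 0.3])  # thresholds near zero

    print("wide  :", ["%.2f" % p for p in wide])    # middle dominates
    print("narrow:", ["%.2f" % p for p in narrow])  # extremes gain

With the wide thresholds the probabilities are about .15/.69/.15; with
the narrow ones about .30/.40/.30, so the same patients give more
"easy" and "hard" responses without any change in the items themselves.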

While such a thing could be explained with an attitude or rating scale
measure, I think it would be harder to explain with dichotomously
scored items on an educational achievement test.  There I would wonder
whether it wasn't simply a technical artifact of an increase in the
variability of the person measures.

Any comments, experience, or literature to share?  Feel free to respond
to me directly.

Matthew Schulz
Department of Statistical Research
ACT, Inc.
Ph. 319-337-1468
matt.schulz at act.org

