[Rasch] Rasch applied rashly to quash innovation

Steven L. Kramer skramer1958 at verizon.net
Fri Dec 14 02:25:03 EST 2007


Dear Rasch community:

I am going to make a controversial and important claim, and provide evidence for it.  I'd like your response, indicating 
a. Am I wrong? and 
b. If I'm correct, what can be done about it?

Claim:
Adaptive testing in mathematics diagnosis and placement is punishing mathematics innovators and harming their students.  Inappropriate application of Rasch modeling is largely to blame.

The specific example I have in mind is the Core Plus mathematics curriculum and its interaction with the ACCUPLACER college placement test--but the principle applies to other innovations and other tests.

Core Plus was one of the "integrated" high school mathematics curricula developed in the 1990s with U.S. National Science Foundation funding.  It presents concepts in an order the developers believed was superior to, and easier to learn than, the traditional sequence.  Pilot testing indicated that, compared with traditionally taught students, first-edition Core Plus students on average scored much higher on "pre-Calculus" and other advanced topics--but somewhat lower on fluent application of Algebra 1 procedures.  

The curriculum developers also consciously set out to reduce "barriers" that deny students access to interesting mathematics they could otherwise learn.  
So a student who cannot consistently multiply 35 x 27 by hand might be given a pocket calculator, and might go on to learn Algebra 1, Geometry, and Trigonometry quite well.

The harm from ACCUPLACER and other adaptive tests lies in the inappropriate assumption that mathematics skill is unidimensional.  Mathematics items are ordered by difficulty on this unidimensional scale, and it is assumed that a student who is 75% likely to solve a word problem requiring simultaneous equations, or who has a solid conceptual understanding of how multiplication by "i" can be conceived as a 90-degree rotation in the coordinate plane, is almost certain to be able to multiply 35 x 27.  But for Core Plus students this assumption is not true.
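
To make the assumption concrete: under the Rasch model, the probability of a correct response depends only on the difference between a single ability parameter and the item's difficulty.  Here is a minimal Python sketch (the item difficulties are invented for illustration) showing that once a student performs at 75% on a "hard" item, the model forces a near-certain prediction on any much easier item:

    import math

    def rasch_p(theta, b):
        """Rasch model: probability of a correct response, given ability
        theta and item difficulty b on the same logit scale."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Invented difficulties on a single logit scale: a hard word problem
    # requiring simultaneous equations, and an easy arithmetic item.
    b_hard = 2.0
    b_easy = -1.0

    # Ability at which the student answers the hard item correctly 75% of the time.
    theta = b_hard + math.log(0.75 / 0.25)

    print(round(rasch_p(theta, b_hard), 2))  # 0.75 by construction
    print(round(rasch_p(theta, b_easy), 2))  # ~0.98: the model forces near-certainty

For a Core Plus student whose by-hand procedural fluency lags behind their conceptual skill, that forced prediction is exactly what fails.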

At the extreme end, the result is this:  if students are started at the "difficult" problems in an adaptive test like ACCUPLACER, they place into Calculus.  If started on the "easy" problems, they fail enough that they place into arithmetic.  Most testers start at the easiest end of the test and work up--so Core Plus students and their teachers are punished.
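
I do not know ACCUPLACER's actual routing algorithm, but the starting-point effect can be illustrated with a toy branching rule and an invented item bank.  In the sketch below, the success probabilities caricature a student who is strong on advanced topics but inconsistent on by-hand procedures:

    import random

    random.seed(0)

    # Invented item bank, ordered easy -> hard on the test's single scale,
    # with invented success probabilities for a caricatured Core Plus student:
    # strong on advanced topics, inconsistent on by-hand procedures.
    bank = ["arithmetic"] * 4 + ["algebra1"] * 4 + ["precalc"] * 4
    p_correct = {"arithmetic": 0.40, "algebra1": 0.75, "precalc": 0.90}

    def adaptive_run(start, n_items=12):
        """Toy branching rule: a harder item after each correct response,
        an easier item after each incorrect one; return the final topic."""
        i = start
        for _ in range(n_items):
            correct = random.random() < p_correct[bank[i]]
            i = min(i + 1, len(bank) - 1) if correct else max(i - 1, 0)
        return bank[i]

    def placement_rates(start, trials=2000):
        """Share of simulated test sessions ending at each topic level."""
        ends = [adaptive_run(start) for _ in range(trials)]
        return {topic: ends.count(topic) / trials for topic in p_correct}

    print("start easy:", placement_rates(0))             # mostly arithmetic / algebra1
    print("start hard:", placement_rates(len(bank) - 1)) # mostly precalc

With the same (non-unidimensional) student, the sessions that start at the easy end tend to get stuck among the arithmetic items, while the sessions that start at the hard end tend to finish among the pre-calculus items.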

Similar applications of adaptive testing, e.g. by the Northwest Evaluation Association (NWEA), have been used for formative assessment--but they misdiagnose Core Plus students.

Bottom line:  nothing in real life ever fits a model perfectly, and no test or learning construct is ever truly unidimensional.  But tests and constructs are often close enough to "unidimensional" that a Rasch model can be applied fruitfully.  Often this unidimensionality is socially constructed, as when everyone is taught the same math concepts in the same order, so that in typical situations, with a typically instructed population, something like "math skill" can usefully be treated as though it were unidimensional.

But in the context of an innovation, this unidimensionality assumption is shattered.  And the "rebel" population is punished, because people are blindly over-applying the Rasch model--which, in an adaptive testing environment that typically starts with "easy" problems, punishes non-conformists with low scores.

Am I correct?  And what should be done?

Steve Kramer
Arcadia  University