FW: [Rasch] Rasch applied rashly to quash innovation

Stephanou, Andrew Stephanou at acer.edu.au
Fri Dec 14 08:22:28 EST 2007


________________________________

From: Haans, A. [mailto:A.Haans at tue.nl] 
Sent: Friday, 14 December 2007 2:41 AM
To: Steven L. Kramer; rasch
Cc: Robin Marcus
Subject: RE: [Rasch] Rasch applied rashly to quash innovation


Just some quick thoughts,

 

In general, one should not blindly over-apply models (over-applying
anything is wrong by definition), and certainly not when the model's
predictions do not fit a person's response vector. Also: Core Plus
students can solve a multiplication like 35 * 27 if simply given a
calculator. The conclusion is that mathematics ability can only be truly
unidimensional when the item difficulties are the same regardless of the
technique used to solve an item (i.e., with or without a calculator).
That is, the order of the item difficulties should be independent of the
method used to solve a problem (cheating not allowed, of course).
Interestingly, true math geniuses often use non-standard methods to
solve a problem.
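
To make that invariance criterion concrete: a minimal sketch in Python,
where the difficulty values are purely hypothetical stand-ins for Rasch
item calibrations run separately by solution method.

    def ranks(values):
        """Rank positions of items by difficulty (0 = easiest)."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    # Hypothetical Rasch difficulties (logits) for the same five items,
    # calibrated separately by solution method. Item 0 is the 35 * 27
    # multiplication: hard by hand, trivial with a calculator.
    b_by_hand    = [0.9, -0.4, 0.1, -1.2, 1.8]
    b_calculator = [-2.5, -0.4, 0.1, -1.2, 1.8]

    # Unidimensionality in the sense above requires the same ordering.
    print(ranks(b_by_hand) == ranks(b_calculator))  # False: order changes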

 

Kind regards,

 

Antal 


________________________________

From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On
Behalf Of Steven L. Kramer
Sent: Thursday, 13 December 2007 16:25
To: rasch at acer.edu.au
Cc: Robin Marcus
Subject: [Rasch] Rasch applied rashly to quash innovation


Dear Rasch community:
 
I am going to make a controversial and important claim, and provide
evidence for it.  I'd like your response on two questions: 
a. Am I wrong? and 
b. If I'm correct, what can be done about it?
 
Claim:
Adaptive testing in mathematics diagnosis and placement is punishing
mathematics innovators and harming their students.  Inappropriate
application of Rasch modeling is largely to blame.
 
The specific example I have in mind is the Core Plus mathematics
curriculum and its interaction with the AccuPlacer college placement
test--but the principle applies to other innovations and other tests.
 
Core Plus was one of the "integrated" high school mathematics curricula
developed in the 1990s with U.S. National Science Foundation funding.
It teaches concepts in an order the developers believed was superior to,
and easier to learn than, the traditional sequence.  Pilot testing
indicated that, compared to traditionally taught students, first-edition
Core Plus students on average scored much higher on "pre-Calculus" and
other advanced topics--but somewhat lower on fluent application of
Algebra 1 procedures.  
 
The curriculum developers also consciously set out to reduce "barriers"
that deny students access to interesting mathematics.  So a student
unable to consistently multiply 35 x 27 by hand might be given a pocket
calculator, and might go on to learn Algebra 1, Geometry, and
Trigonometry quite well.
 
The harm from AccuPlacer and other adaptive tests resides in the
inappropriate assumption that mathematics skill is unidimensional.
Mathematics items are ordered by difficulty on this unidimensional
scale, and it is assumed that a student who is 75% likely to solve a
word problem that requires setting up simultaneous equations, or who has
a solid conceptual understanding of how multiplication by "i" can be
conceived as a 90-degree rotation in the coordinate plane, is almost
certain to be able to multiply 35 x 27.  But for Core Plus students this
assumption is not true.
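
Under the Rasch model, that "almost certain" is a direct arithmetic
consequence.  A minimal sketch in Python, with hypothetical logit
difficulties chosen only to put the two items four logits apart:

    import math

    def p_correct(theta, b):
        """Rasch item response function: P(correct | ability, difficulty)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Hypothetical difficulties: a hard simultaneous-equations word
    # problem, and the arithmetic item four logits easier.
    b_hard, b_easy = 2.0, -2.0

    # Ability implied by a 75% chance on the hard item:
    # 0.75 = 1 / (1 + exp(-(theta - b_hard)))  =>  theta = b_hard + ln 3
    theta = b_hard + math.log(3)

    print(p_correct(theta, b_hard))  # 0.75 by construction
    print(p_correct(theta, b_easy))  # ~0.99 -- but only if the model holds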
 
At the extreme end, the result is this:  if students are started at the
"difficult" problems in an adaptive test like AccuPlacer, they place
into Calculus.  If started on the "easy" problems, they fail enough of
them to place into arithmetic.  Most administrations start at the
easiest end of the test and work up--so the Core Plus students and
teachers are punished.
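
A toy simulation, with made-up success probabilities, shows how the
starting point alone can drive the placement for a student whose
profile violates unidimensionality:

    import random

    random.seed(1)

    # Made-up success probabilities for a Core Plus-like student on ten
    # items ordered easy -> hard: weak on low-level procedural items,
    # strong on the advanced ones.
    p = [0.4, 0.4, 0.5, 0.5, 0.6, 0.8, 0.85, 0.9, 0.9, 0.9]

    def adaptive_run(start, steps=25):
        """Step up one item after a correct answer, down after an error."""
        i = start
        for _ in range(steps):
            i = min(i + 1, 9) if random.random() < p[i] else max(i - 1, 0)
        return i  # final position stands in for the placement

    def mean_final(start, trials=2000):
        return sum(adaptive_run(start) for _ in range(trials)) / trials

    print(mean_final(start=0))  # tends to stall among the easy items
    print(mean_final(start=9))  # tends to stay among the hard items

The same simulated student "places" low or high depending only on where
the test begins.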
 
Similar adaptive tests, e.g. those of the Northwest Evaluation
Association, have been used for formative assessment--but they diagnose
Core Plus students incorrectly.
 
Bottom line:  nothing in real life ever fits a model perfectly, and no
test or learning construct is ever truly unidimensional.  But often,
tests and constructs are close enough to "unidimensional" that a Rasch
model can be applied fruitfully.  Often, this unidimensionality is
socially constructed, as when everyone is taught the same math concepts
in the same order, so that in typical situations with a typically
instructed population something like "math skill" can usefully be
treated as though it were unidimensional.
 
But in the context of an innovation, this unidimensionality assumption
is shattered.  And the "rebel" population is punished, because people
are blindly over-applying the Rasch model--which, in an adaptive testing
environment that starts with the typically 'easy' problems, punishes
non-conformists with low scores.
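
One partial safeguard would be to flag misfitting response vectors
before trusting a placement.  A minimal person-fit sketch in Python,
with hypothetical items and a hypothetical ability estimate; outfit
mean-squares near 1 indicate fit, and values above about 2 are commonly
flagged:

    import math

    def p_correct(theta, b):
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def outfit(theta, difficulties, responses):
        """Mean squared standardized residual (outfit mean-square)."""
        total = 0.0
        for b, x in zip(difficulties, responses):
            p = p_correct(theta, b)
            total += (x - p) ** 2 / (p * (1.0 - p))
        return total / len(responses)

    # Five hypothetical items, easy -> hard, and two response vectors
    # with the same raw score: a Core Plus-like pattern (misses the easy
    # arithmetic, solves the hard conceptual items) and a typical one.
    b = [-2.0, -1.0, 0.0, 1.0, 2.0]
    core_plus = [0, 0, 1, 1, 1]
    typical   = [1, 1, 1, 0, 0]

    theta = 0.45  # hypothetical ability estimate for a 3/5 raw score
    print(outfit(theta, b, core_plus))  # ~4.6: flagged as misfitting
    print(outfit(theta, b, typical))    # ~0.35: no misfit

A placement based on a flagged vector could then be routed to human
review instead of being taken at face value.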
 
Am I correct?  And what should be done?
 
Steve Kramer
Arcadia University
 