[Rasch] Rasch applied rashly to quash innovation

Mark Moulton MarkM at eddata.com
Fri Dec 14 09:05:35 EST 2007



I agree with your important claim, and it corresponds to my own experiences
in the education field.  Unidimensionality is often a "socially constructed"
artifact of students being exposed to the same curriculum.  Use of a
different curriculum exposes the fact that unidimensionality is not a
property of the test per se but of a given student sample, applicable only
to that sample.  The Rasch invariance property (invariance of item
difficulties across student samples) applies only when unidimensionality can
be shown to be a property of the test itself.  Those of us in the
educational testing field have gotten away with assuming unidimensionality
because students tend to be exposed to the same kind of curricula.  However,
in specific cases this assumption can lead to serious errors, and not just
in CAT.
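The invariance property described here can be made concrete with a small sketch (all numbers illustrative): under the Rasch model the log-odds of a correct response is simply theta - b, so the log-odds gap between any two items is identical for every examinee -- but only if a single latent dimension theta really describes everyone.

```python
import math

def rasch_p(theta, b):
    # Rasch item characteristic curve: P(correct) = 1 / (1 + exp(-(theta - b)))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_odds(theta, b):
    # For the Rasch model this equals theta - b exactly.
    p = rasch_p(theta, b)
    return math.log(p / (1.0 - p))

# Invariance: the log-odds gap between two items is b2 - b1 for every person,
# so relative item difficulties do not depend on which students are sampled --
# provided a single dimension theta actually describes everyone.
b1, b2 = -0.5, 1.2
for theta in (-2.0, 0.0, 3.0):
    print(round(log_odds(theta, b1) - log_odds(theta, b2), 6))  # 1.7 each time
```

When students differ on a second dimension (e.g. procedural vs. conceptual skill), no single theta fits, and this gap stops being constant across samples -- which is exactly the failure of invariance described above.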


In a Rasch context, there are two remedies:  a) only calibrate items with a
sample of students that is as diverse as the complete theoretical
population, including diversity of curricula; or b) only give tests to
students who received the same curriculum from which the tests were developed.


However, these remedies are temporary workarounds.  The correct way to solve
the problem is to use a multidimensional model with the following properties:


a)      Students and items are conjointly located in a common n-dimensional
space.

b)      Each person/item expected value maximally uses all the data in the
data matrix.

c)      Item spatial coordinates are invariant across person samples.

d)      Person spatial coordinates are invariant across item samples from
the same n-space.

e)      Item and person coordinates can be generalized across tests.

f)      Items can be selectively administered to students to maximize
information about the student in a specified multidimensional space rather
than a 1-dimensional space.


To my knowledge, multidimensional models with these properties have not been
applied to CAT.  They should be.  
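As a rough illustration of what such a model looks like, here is a minimal compensatory multidimensional sketch. The item loadings and the "procedural"/"conceptual" labels are illustrative assumptions for this example, not part of any operational model:

```python
import math

def mirt_p(theta, a, d):
    # Compensatory multidimensional IRT: P = sigmoid(sum_k a_k * theta_k + d),
    # so the student's expected value uses coordinates on every dimension.
    z = sum(ak * tk for ak, tk in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical profile: weak on procedural dimension 0, strong on
# conceptual dimension 1.  (All numbers are illustrative only.)
student = (-1.0, 2.0)

arithmetic_item = ((1.5, 0.1), 0.5)   # loads mostly on dimension 0
precalc_item    = ((0.2, 1.4), -1.0)  # loads mostly on dimension 1

p_easy = mirt_p(student, *arithmetic_item)
p_hard = mirt_p(student, *precalc_item)
print(p_hard > p_easy)  # True: the "harder" item is the more likely success
```

In a model like this, a unidimensional ordering of the two items by difficulty is simply undefined: which item is "harder" depends on where the student sits in the n-space.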


For semi-related information, go to www.eddata.com


Mark Moulton

Educational Data Systems






-----Original Message-----
From: Steven L. Kramer [mailto:skramer1958 at verizon.net] 
Sent: Thursday, December 13, 2007 7:25 AM
To: rasch at acer.edu.au
Cc: Robin Marcus
Subject: [Rasch] Rasch applied rashly to quash innovation


Dear Rasch community:


I am going to make a controversial and important claim, and provide evidence
for it.  I'd like your response, indicating 

a. Am I wrong? and 

b. If I'm correct, what can be done about it?



Adaptive testing in mathematics diagnosis and placement is punishing
mathematics innovators and harming their students.  Inappropriate
application of Rasch modeling is largely to blame.


The specific example I have in mind is the Core Plus mathematics curriculum
and its interaction with the AccuPlacer college placement test--but the
principle applies to other innovations and other tests.


Core Plus was one of the "integrated" high school mathematics curricula
developed in the 1990s with U.S. National Science Foundation funding.  It
teaches concepts in an order the developers believed was superior and easier
to learn than the traditional sequence.  Pilot testing indicated that,
compared to traditionally taught students, first-edition Core Plus students
on average scored much higher on "pre-Calculus" and other advanced
topics--but somewhat lower on fluent application of Algebra 1 procedures.


The curriculum developers also consciously set out to reduce "barriers" that
deny students access to interesting mathematics they could otherwise learn.

So, a student unable to consistently multiply 35 x 27 by hand might be given
a pocket calculator, and might learn Algebra 1, Geometry, and Trigonometry
quite well.


The harm from Accuplacer and other adaptive tests resides in inappropriately
assuming that mathematics skill is unidimensional.  Mathematics items are
ordered by difficulty on this unidimensional scale, and it is assumed that a
student who is 75% likely to solve a word problem requiring simultaneous
equations, or who has a solid conceptual understanding of how multiplication
by "i" can be conceived as a 90-degree rotation in the coordinate plane, is
almost certain to be able to multiply 35 x 27.  But for Core Plus students
this assumption is not true.
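To see how strong that implication is under a strictly unidimensional Rasch model, a quick back-of-envelope calculation (the difficulty values are chosen purely for illustration):

```python
import math

def rasch_p(theta, b):
    # Rasch probability of a correct response.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# If a student answers a hard item (difficulty b_hard) correctly with
# probability 0.75, the Rasch model pins down theta = b_hard + log(3).
b_hard = 2.0                        # illustrative difficulty
theta = b_hard + math.log(0.75 / 0.25)

# An item several logits easier (say, multiplying 35 x 27 by hand) is then
# predicted to be near-certain -- the prediction that fails for Core Plus.
b_easy = b_hard - 4.0
print(round(rasch_p(theta, b_easy), 3))  # ~0.994
```

The model offers no way to express "75% on the hard item, 50% on the easy one": a single theta forces the near-certainty.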


At the extreme end, the result is this:  if students are started at the
"difficult" problems in an adaptive test like Accuplacer, they place into
Calculus.  If started on the "easy" problems, they fail enough so that they
place into arithmetic.  Most testers start with the easiest end of the test
and work up--so the Core Plus students and teachers are punished.
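The extreme routing behavior described above can be caricatured with a toy up/down adaptive rule and a deliberately inverted response profile. Both are illustrative assumptions, not a model of Accuplacer's actual algorithm:

```python
def staircase_cat(start, answers_correctly, steps=10):
    # Toy adaptive rule: move up one difficulty level after a correct
    # answer, down one level after an incorrect answer.
    level = start
    for _ in range(steps):
        level += 1 if answers_correctly(level) else -1
    return level

# Caricature of the inverted profile: the student gets advanced items
# (level >= 3) right but misses routine procedural items (level < 3).
core_plus_student = lambda level: level >= 3

print(staircase_cat(start=0, answers_correctly=core_plus_student))  # sinks to -10
print(staircase_cat(start=6, answers_correctly=core_plus_student))  # climbs to 16
```

The same student "places" at opposite ends of the scale depending only on the starting item, because the routing rule assumes success at one level implies success at all lower levels.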


Similar applications of adaptive testing, e.g. by the Northwest Evaluation
Association, have been used for formative assessment--but they diagnose Core
Plus students incorrectly.


Bottom line:  nothing in real life ever fits a model perfectly, and no test
or learning construct is ever truly unidimensional.  But often, tests and
constructs are close enough to "unidimensional" that a Rasch model can be
applied fruitfully.  Often, this unidimensionality is socially constructed,
as when everyone is taught the same math concepts in the same order, so that
in typical situations, with a typically instructed population, something
like "math skill" can be usefully treated as though it were unidimensional.


But in the context of an innovation, this unidimensionality assumption is
shattered.  And the "rebel" population is punished, because people are
blindly over-applying the Rasch model--which, in an adaptive testing
environment that starts with typically 'easy' problems, punishes
non-conformists with low scores.


Am I correct?  And what should be done?


Steve Kramer

Arcadia  University

