[Rasch] Unidimensionality (follow up with the practical issue)

STEVEN KRAMER skramer1958 at verizon.net
Fri May 9 00:20:44 EST 2008


I'm trying to prepare testimony about Maryland's state testing in 
mathematics.  We test students every year (as required by U. S. law) and 
there are a lot of problems specific to these particular tests.  But there 
is an underlying issue I want to be able to address confidently if asked.

There seems to be a lot of consensus about what kids should know by the end 
of eighth grade, but many competing routes to get there.  The state (which 
writes the test) is not the agency that chooses curriculum, and certainly not 
the agency that writes curriculum.  Some texts are designed as 
smorgasbords:  everything offered every year, taught in isolation without 
much connection across topics, so that teachers can pick and choose which 
topics to focus on, and when.  Other texts are coherent curricula with 
a logical and connected path through the middle school material.  The 
coherent curricula have been developed over time and with considerable 
piloting.  Examples of carefully piloted and coherent curricula include 
texts loved by the "conservative" side of the math wars (e.g., the Singapore 
curriculum) and by the "liberal" side (e.g., Connected Mathematics).  But 
the coherent curricula cannot teach topics in the same order as the state 
tests, unless the state criterion-referenced tests are written directly to 
that particular curriculum (and states would probably be sued if they wrote 
a test directly to one particular curriculum).  So the testing regime, with 
yearly criterion-referenced tests, puts a large thumb on the scale towards 
teaching in a piecemeal and incoherent way and towards using a haphazardly 
designed "smorgasbord" textbook.

I was wondering whether Rasch modeling could help with this.  What if one 
test were designed to place sixth, seventh, and eighth grade students on a 
single scale?  "Proficiency" would be defined as a different and more 
difficult "theta" each year.  The Rasch scale would line up test items in an 
order of difficulty most common in the state, and the "cut points" could be 
set at whatever state officials decide counts as "how far students 
should have gotten by this year".  But if a school were using a coherent 
curriculum that taught in a different order from what was usual, then 
students would get some "difficult" items right and some "easy" items wrong, 
but overall get the same score as others whose curriculum order matched the 
Rasch model's.  The Rasch-designed test would differ from our current 
criterion-referenced tests in that it would cover a wider range of topics 
each year, thus forcing less "shoe-horning" into the specific topics chosen 
by the state for that particular grade level.
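
To make that concrete for myself, here is a minimal sketch of the Rasch 
property I am relying on (Python, with made-up item difficulties): because 
the raw score is a sufficient statistic for theta, two students with the 
same number correct get the same maximum-likelihood estimate of theta even 
when they got different items right.

import numpy as np
from scipy.optimize import brentq

# Hypothetical item difficulties (b-values), sorted easy to hard.
difficulties = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mle_theta(responses, b):
    """Maximum-likelihood theta: solve observed score = expected score."""
    score = responses.sum()
    return brentq(lambda theta: rasch_prob(theta, b).sum() - score, -6, 6)

# Two students with the same raw score (4 correct) but different patterns:
# one follows the usual easy-to-hard ordering, the other misses some "easy"
# items and gets some "hard" ones right (curriculum taught in another order).
pattern_a = np.array([1, 1, 1, 1, 0, 0, 0])
pattern_b = np.array([0, 1, 0, 1, 1, 0, 1])

print(mle_theta(pattern_a, difficulties))  # same estimate as below...
print(mle_theta(pattern_b, difficulties))  # ...because only the total counts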

But using a Rasch model this way would require treating the data as though 
it were unidimensional, even though we have strong reason to believe that it 
is not.  That is why I need to better understand what happens when you 
extract a Rasch dimension from what is truly a multi-dimensional situation.
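
Here is the kind of toy simulation I have in mind for exploring that 
question: generate responses from two correlated traits (say, half the 
items "number" and half "geometry"), estimate a single Rasch theta for 
each student, and see which composite of the two traits it tracks.  All 
the numbers are invented, and to keep the sketch short the item 
difficulties are treated as known rather than estimated from the data.

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 40

# Two latent traits per student, correlated 0.5.
thetas = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]],
                                 size=n_persons)

# First half of the items measures trait 1, second half trait 2.
loading = np.repeat([0, 1], n_items // 2)
b = rng.uniform(-2, 2, n_items)  # item difficulties

# Rasch responses: P(correct) = logistic(theta on that item's trait - b).
eta = thetas[:, loading] - b
responses = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-eta))).astype(int)

def mle_theta(x, b):
    """Unidimensional Rasch MLE: solve observed score = expected score."""
    score = x.sum()
    if score == 0 or score == len(b):   # no finite estimate for extreme scores
        return np.nan
    return brentq(lambda t: (1 / (1 + np.exp(-(t - b)))).sum() - score, -8, 8)

theta_hat = np.array([mle_theta(x, b) for x in responses])
ok = ~np.isnan(theta_hat)

print("corr with trait 1:", np.corrcoef(theta_hat[ok], thetas[ok, 0])[0, 1])
print("corr with trait 2:", np.corrcoef(theta_hat[ok], thetas[ok, 1])[0, 1])
print("corr with average:", np.corrcoef(theta_hat[ok], thetas[ok].mean(axis=1))[0, 1])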

Steve Kramer 




