[Rasch] How to address a stop rule problem and getting more participants

Timothy Pelton tpelton at uvic.ca
Tue Sep 29 03:07:00 EST 2009

Perhaps part of the problem is that the items are not as straightforward as they appear.

A 'simple' question like 6+7 (couched in a context appropriate for a K-2 student) may become easier relative to other questions as the children gain experience.

A pre-K or K student might only be able to solve the problem if they can transfer it to a concrete situation (e.g., beans on the table) - counting out a pile of 6 and a pile of 7, combining them, and then counting all. It is a real and novel problem that requires the child to make sense of the situation, and success depends on perseverance and confidence.

As they are exposed to other problems and solution processes, they begin to recognize the nature of the problem and approach it by counting on, or counting on from the larger number (using objects, fingers, pictures, etc.) - still a bit brute force, but now using their prior experience to trim down the task and reduce the potential for some kinds of errors.  This probably requires less perseverance and confidence.

Then, as they build their number sense, they are able to decompose the numbers and recognize either that 6=5+1 and 7=5+2, so the answer is 5+5+1+2 = 13 (a derived fact), or that you just need 3 more to make 7 up to 10 and 6=3+3, so 10+3 = 13 (another derived fact), or they might recall that 7+7 = 14 and take away 1, since 6+7 is one less (a near fact).  The nature of the problem has changed from understanding the situation and applying a brute-force counting algorithm to puzzling with numbers and finding relationships that let them reach the answer they are seeking in a more reasoned way.

And eventually children will recognize the pair and remember that 6+7 is 13 (a recalled fact) - no longer much of a problem at all, and trivial for some.
(These ideas have been well examined in Carpenter and Fennema's Cognitively Guided Instruction theory.)
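For concreteness, the arithmetic behind three of these strategies can be sketched in code. This is purely illustrative - the function names and decompositions are mine, not CGI's - but it shows each route reaching the same sum by doing a different kind of work:

```python
# Illustrative sketch of three of the strategies for 6 + 7 described above.
# Function names and structure are hypothetical, not drawn from CGI itself.

def count_all(a, b):
    """Earliest strategy: model each addend as a concrete pile and count everything."""
    pile = ["bean"] * a + ["bean"] * b
    return len(pile)

def make_ten(a, b):
    """Derived fact: move just enough from one addend to bring the other up to 10."""
    needed = 10 - b              # 3 more makes 7 up to 10
    return 10 + (a - needed)     # 10 + 3 = 13

def near_double(a, b):
    """Near fact: recall the double of the larger addend and adjust down by the gap."""
    larger, smaller = max(a, b), min(a, b)
    return 2 * larger - (larger - smaller)   # 7 + 7 = 14, take away 1

print(count_all(6, 7), make_ten(6, 7), near_double(6, 7))
```

Every route lands on 13, but the amount of work - and the kind of number knowledge each one presupposes - differs, which is exactly why the "same" item is not the same problem at different points in development.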

Thus it seems that we cannot assume that a particular problem in mathematics (and indeed in many other areas) will remain at the same level of difficulty - or that it is even the same problem - as students progress through the educational process.  Nor can we assume that every problem or problem class evolves or drifts in difficulty at the same rate.

What to do? Perhaps we need to consider a bit more than just the child's answer to a problem.  Perhaps we need to classify their approach and then calibrate problem-approach combinations on the scale of difficulty.  Or perhaps we could define several different constructs that underlie mathematics ability (e.g., perseverance, recalled experience, and number sense) and work to support and describe progress on each of these scales.

Tim Pelton
Associate Professor
Faculty of Education
University of Victoria
office: 250-721-7803
fax: 250-721-7598
From: rasch-bounces at acer.edu.au [rasch-bounces at acer.edu.au] On Behalf Of Chris Wolfe [cbwolfe at buffalo.edu]
Sent: Saturday, September 26, 2009 2:19 PM
To: rasch at acer.edu.au
Subject: [Rasch] How to address a stop rule problem and getting more participants


First, we will summarize our current situation (please recall that this
was created by the variegated needs of a string of projects, not by an
a priori planning process!). Then we will pose the question and possible
solutions for your critique.


1. Over 5 projects and 10 years, we have developed and used different
versions of the instrument.  These versions differed in their inclusion
of more complex items to capitalize on growth in mathematical skill
through second grade.

2. The instrument was mostly used for pre-K; we fit the data to the
Rasch model to garner the model's advantages and to eventually extend
the instrument to serve as a single measure in (more recent)
longitudinal work. We used a "start" rule (6 correct in a row to
constitute a base) and a "stop" rule (4 wrong in a row to constitute a
ceiling) based on the Rasch ordering.
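As a rough sketch (in Python, with hypothetical names), the administration rules might be coded as follows. One detail worth making explicit: when the test stops on 4 wrong in a row, the tail of every administered response string is forced to end 1 0 0 0 0 - which is precisely the dependency at issue later in this post:

```python
# Hypothetical sketch of the start/stop rules described above; names are mine.

def apply_rules(responses, start_run=6, stop_run=4):
    """Administer items in Rasch difficulty order until stop_run wrong in a
    row (the ceiling); report whether start_run correct in a row (the base)
    ever occurred.  `responses` holds the child's would-be 1/0 answer to
    each item in difficulty order."""
    taken, wrong_streak, right_streak, base = [], 0, 0, False
    for r in responses:
        taken.append(r)
        if r == 1:
            right_streak += 1
            wrong_streak = 0
            base = base or right_streak >= start_run
        else:
            wrong_streak += 1
            right_streak = 0
        if wrong_streak >= stop_run:
            break
    return taken, base

taken, base = apply_rules([1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1])
# Testing halts at the 4th consecutive wrong answer, so the last 5
# administered responses are necessarily 1, 0, 0, 0, 0.
```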

3. As we added items to allow the test to be used with older children,
we piloted those items first, and then added them to the assessment.
Unfortunately, as we did, some of the theoretical ordering of items at
the higher end of the scale was not consistent with Winsteps' difficulty
ordering.  But crosstabs of students who took both items *did* confirm
the theoretical sequence. We suspected the stop rule was "messing this up."

4. Rasch expert and Winsteps author Mike Linacre agreed and suggested,
"Since the last 5 responses before the "4 wrong" test stops must be
10000, there is a dependency between this response string and the
missing data.  If the last 5 responses for each person are removed from
the estimation of item difficulties, the dependency between the stopping
rule and the item difficulties almost disappears. You may like to
experiment with this in your item-difficulty estimation."
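In data-cleaning terms, Linacre's suggestion amounts to recoding each person's last 5 observed responses as missing before calibrating item difficulties. A minimal sketch (the helper name is hypothetical, and None stands in for missing/not-reached responses):

```python
def trim_tail(person_responses, n=5):
    """Recode a person's last n observed responses as missing (None) so they
    do not enter the item-difficulty estimation, per Linacre's suggestion.
    Items never reached are already None.  Returns a new list."""
    observed = [i for i, r in enumerate(person_responses) if r is not None]
    trimmed = list(person_responses)
    for i in observed[-n:]:
        trimmed[i] = None
    return trimmed

# A person who stopped after 4 wrong in a row; the last two items were
# never administered:
row = [1, 1, 1, 0, 1, 0, 0, 0, 0, None, None]
```

Applied to the example row, the final 1 0 0 0 0 run - the part of the string dictated by the stop rule rather than by the person's ability - becomes missing, so it no longer distorts the item calibrations.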

5.  In addition, most of the children assessed so far are within the
pre-K to first-grade range, leaving very few children who have answered
questions at the upper end of the test (both a problem with the stop
rule and a function of their mathematical skill sets).

6. We wish to have the test properly ordered, and we are about to
collect new data to fix the ordering. We wish to ensure we do this
correctly and intelligently, so that we can (a) accurately score the
extant data (on 1000s of pre-K kids followed through 1st grade), and
(b) correctly order the items for future use of the instrument and so
forth.

*Proposed Solution:*

Our thought is that we have adequate numbers of children at the lower
end, and that our problem is a less representative sample at the higher
end. We do not wish to test children by giving them all items; our
concern is that there are 100s of items, and we do not wish to frustrate
the lower-ability/younger children nor bore the higher-ability/older
children.

Therefore, we propose the following:

A.  Administer the entire upper end of the test to kindergarten through
2nd-grade children (using the "start rule" or item that worked in our
previous testing), even though it will be a bit tedious and frustrating
for some (over multiple sessions as necessary).

B. Combine these new data with our existing data set, *with the last 5
responses eliminated*, as described previously.

C. Run Winsteps, stacking the data, to determine item difficulty.

D. Treat those item difficulties as fixed anchors and use them to
compute Rasch scores for extant and future data.
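Once item difficulties are anchored, scoring each person reduces to a one-dimensional search. Below is a minimal sketch of maximum-likelihood ability estimation under the dichotomous Rasch model with the difficulties held fixed (names are mine; it assumes a mixed response pattern, since all-right or all-wrong patterns have no finite ML estimate and need a separate convention, as Winsteps itself applies):

```python
import math

def rasch_score(responses, anchored_difficulties, iters=25):
    """Newton-Raphson ML estimate of ability (theta) with item difficulties
    held fixed (anchored).  At the solution, the model-expected number
    right equals the observed number right.  Assumes a mixed 1/0 pattern."""
    theta = 0.0
    for _ in range(iters):
        probs = [1.0 / (1.0 + math.exp(-(theta - b)))
                 for b in anchored_difficulties]
        residual = sum(responses) - sum(probs)    # observed - expected score
        info = sum(p * (1.0 - p) for p in probs)  # test information at theta
        theta += residual / info
    return theta

# Two right out of three items anchored at -1, 0, and +1 logits:
theta = rasch_score([1, 1, 0], [-1.0, 0.0, 1.0])
```

Because the difficulties are anchored rather than re-estimated, children scored on the extant data and children scored on future data land on the same logit scale - which is the point of step D.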

*Questions re: This Proposed Solution:*

a.  Are we potentially compounding the problem by continuing to use a
start rule?  Does the fact that our current population of scores is
heavily weighted toward the lower portion of the test compensate for the
upcoming data collection's focus on the high end?

b.  What is the best way to determine the optimal number of children
needed to round out this portion of the test?

We would really appreciate any insights that the members of this
listserv could provide.


    Doug Clements & Julie Sarama

            * * *  > > > > * * * < < < <  * * *

SUNY Distinguished Professor / Associate Professor
University at Buffalo, The State University of New York
Department of Learning and Instruction
Graduate School of Education
212 Baldy Hall
Buffalo, NY  14260

E-mail: clements at buffalo.edu            JSarama at buffalo.edu

Please consider the environment before you print
Rasch mailing list
Rasch at acer.edu.au