# [Rasch] How to address a stop rule problem and get more participants

Chris Wolfe cbwolfe at buffalo.edu
Sun Sep 27 07:19:18 EST 2009

```
Hi,

First, we will summarize our current situation (please recall that this
was created by the variegated needs of a string of projects, not by an
a priori planning process!). Then we will pose our questions and some
possible solutions.

*Situation:*

1. Over 5 projects and 10 years, we have developed and used different
versions of the instrument.  These versions differed in their inclusion
of more complex items to capitalize on growth in mathematical skill.

2. Used mostly with pre-K children, the instrument was fit to the Rasch
model to garner Rasch's advantages and eventually to serve as a single
instrument in (more recent) longitudinal work. We used a "start" rule (6
correct in a row to constitute a base) and a "stop" rule (4 wrong in a
row to constitute a ceiling) based on the Rasch ordering.
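# The stop rule above can be sketched as follows (a minimal, hypothetical
# Python illustration; the actual administration procedure may differ in
# its details):

def apply_stop_rule(responses, stop_run=4):
    """Truncate a difficulty-ordered 0/1 response string at the
    ceiling: testing stops once `stop_run` consecutive items are
    answered wrong, so later (harder) items are never administered."""
    wrong = 0
    administered = []
    for r in responses:
        administered.append(r)
        wrong = wrong + 1 if r == 0 else 0
        if wrong == stop_run:
            break
    return administered

# A child who misses four in a row never sees the remaining items:
# apply_stop_rule([1, 1, 1, 1, 0, 0, 0, 0, 1, 1]) -> [1, 1, 1, 1, 0, 0, 0, 0]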

3. As we added items to allow the test to be used with older children,
we piloted these items first, and then added them to the assessment.
Unfortunately, as we did, some of the theoretical ordering of items at
the higher end of the scale was not consistent with Winsteps' difficulty
ordering.  But crosstabs of students who took both items *did* confirm
the theoretical sequence. We suspected the stop rule was "messing this up."

4. Rasch expert and Winsteps author Mike Linacre agreed and suggested,
"Since the last 5 responses before the "4 wrong" test stops must be
10000, there is a dependency between this response string and the
missing data.  If the last 5 responses for each person are removed from
the estimation of item difficulties, the dependency between the stopping
rule and the item difficulties almost disappears. You may like to
experiment with this in your item-difficulty estimation."
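# Linacre's suggestion can be sketched as a simple pre-processing step
# (a hypothetical helper for preparing the calibration data, not
# Winsteps' own mechanism):

def trim_for_estimation(record, n_trim=5):
    """Drop the last `n_trim` administered responses from one child's
    record before item-difficulty estimation.  Under a 4-wrong stop
    rule the final responses are forced to end in 1, 0, 0, 0, 0, so
    they reflect the stopping rule rather than the item difficulties.
    `record` is a list of (item_id, score) pairs in administration
    order."""
    return record[:-n_trim] if len(record) > n_trim else []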

5.  In addition, most of the children assessed so far fall below the
questions at the upper end of the test (a consequence both of the stop
rule and of their mathematical skill sets).

6. We wish to have the test properly ordered, and we are about to
collect new data to fix the ordering. We wish to ensure we do this
correctly and intelligently, so that we can (a) accurately score the
extant data (on 1000s of pre-K kids followed through 1st grade), and
(b) correctly order the items for future uses of the instrument.

*Proposed Solution:*

Our thought is that we have adequate numbers of children at the lower
end; our problem is a less representative sample at the higher end. We
do not wish to administer all items to every child: there are 100s of
items, and we do not wish to frustrate the lower-ability/younger
children nor bore the higher-ability/older children.

Therefore, we propose the following:

A.  Administer the entire upper end of the test to kindergarten through
2nd-grade children (using the "start rule" or item that worked in our
previous testing), even though it will be a bit tedious and frustrating
for some (over multiple sessions as necessary).

B. Combine these new data with our existing data set, *with the last 5
responses eliminated*, as described previously.

C. Run Winsteps, stacking the data, to determine item difficulty.

D. Treat those item difficulties as fixed/anchors and use those item
difficulties to compute Rasch scores for extant and future data.
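# Step D -- scoring with anchored item difficulties -- can be sketched
# as a maximum-likelihood person estimate (a minimal illustration only;
# Winsteps' own JMLE estimation is more involved):

import math

def rasch_ability(responses, difficulties, tol=1e-6, max_iter=100):
    """ML person measure for dichotomous Rasch responses, holding the
    item difficulties fixed (anchored).  `responses` are 0/1 scores on
    the items a child actually took; `difficulties` are the anchored
    logit values for those same items.  Extreme (zero or perfect)
    scores have no finite ML estimate."""
    r = sum(responses)
    if r == 0 or r == len(responses):
        raise ValueError("extreme score: no finite ML estimate")
    theta = 0.0
    for _ in range(max_iter):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        info = sum(pi * (1.0 - pi) for pi in p)  # test information
        step = (r - sum(p)) / info               # Newton-Raphson step
        theta += step
        if abs(step) < tol:
            break
    return theta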

*Questions re: This Proposed Solution:*

a.  Are we potentially compounding the problem by continuing to use a
start rule?  Does the fact that our current population of scores is
heavily weighted toward the lower portion of the test compensate for
the upcoming data collection's focus on the high end?

b.  What is the best way to determine the optimal number of children
needed to round out this portion of the test?
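# For question (b), one rough back-of-the-envelope check (an
# assumption-laden sketch, not a substitute for a proper sampling
# design):

import math

def persons_needed(target_se, p_correct=0.5):
    """Rough count of well-targeted examinees needed to calibrate one
    item: the difficulty standard error is approximately
    1/sqrt(N * p * (1 - p)), so N ~ 1 / (target_se**2 * p * (1 - p)).
    Targeting, model fit, and the linking design matter too, so treat
    this only as a lower bound."""
    return math.ceil(1.0 / (target_se ** 2 * p_correct * (1 - p_correct)))

# e.g. persons_needed(0.2) -> 100 on-target children per new item
# under these assumptions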

We would really appreciate any insights that the members of this
listserv could provide.

Thanks,

Doug Clements & Julie Sarama

* * *  > > > > * * * < < < <  * * *

SUNY Distinguished Professor / Associate Professor
University at Buffalo, The State University of New York
Department of Learning and Instruction
212 Baldy Hall
Buffalo, NY  14260

E-mail: clements at buffalo.edu            JSarama at buffalo.edu

-------------------------------------------------
Please consider the environment before you print

```