[Rasch] missing data thanks and summary

Fidelman, Carolyn Carolyn.Fidelman at ed.gov
Thu Jan 6 07:18:36 EST 2011


Hi All,

I want to first of all thank everyone who took the time to advise me on this situation. I joined this group after attending IOMW in Boulder last spring, and I find the responses here just as helpful and interesting as that conference was to attend.

Off-list I got a response from Larry Ludlow who directed me to his paper

Ludlow, L. H., & O'Leary, M. (1999). Omitted and not reached items: Practical data analysis implications. Educational and Psychological Measurement, 59, 615-630.

which is a great primer on this topic.

The problem is that when you go to look up the usual literature on missing data, it applies to randomized-style inferential studies rather than to psychometrics. For example, while the missing values in these data are definitely NMAR, that is actually a helpful thing here. Because of the mode of testing (individual administration) and the type of examinees (young children for whom the test is not high stakes), and because the administration protocol calls for the items to be given sequentially up until the student begins to struggle (as specified in the stop rules), we pretty much know that up to a certain point an omit is due to the child saying "don't know" or refusing, rather than the item being "not presented". Also note that the test was not speeded; administrators were directed to give examinees ample time to answer and administered all items as recommended. Some of the solutions you all offered needed that clarification.
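
As an illustration of that classification (a hypothetical coding sketch, not the actual NCES file layout or variable names), a blank before the last item the child actually attempted can be treated as an omit, and a blank after that point as not reached / not presented:

def classify_blanks(responses):
    # responses: list of "1", "0", or "" (blank), in administration order
    answered = [i for i, r in enumerate(responses) if r in ("0", "1")]
    last_attempted = answered[-1] if answered else -1
    out = []
    for i, r in enumerate(responses):
        if r in ("0", "1"):
            out.append(r)      # scored response, keep as-is
        elif i < last_attempted:
            out.append("O")    # blank before the stop point: omit ("don't know"/refusal)
        else:
            out.append("R")    # blank at or after the stop point: not reached / not presented
    return out

print(classify_blanks(["1", "1", "", "0", "", ""]))   # -> ['1', '1', 'O', '0', 'R', 'R']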

One theme that came up in the responses was "Should one even bother with descriptives at all when the item and person measures are more accurate?" Purists!!  :-)  Well, as someone said, I AM at NCES, and descriptives are what we do! Gotta have them regardless. I've also always been a big fan of descriptives: they are the starting point, the grounding from which you move on to the better analyses. And some readers will only be able to understand descriptives; you always need to keep connecting with the more general audience. So yes, I have tables of descriptives planned for this report.

The next question is what to use for p-values, etc. Some favor sticking with the real data and letting n vary by item, while others offered ideas for imputing. Steve presented a scenario with few people answering a certain item, but actually (and maybe I should have said so) this is a huge dataset of 22K or more cases, so there is always an adequate number of people answering each item. Still, I will have to watch the effects of wildly varying n as the results come in. I also really like Mark's approach for imputing values and will try it. I may also try scoring those omits (as defined above) as 0, in the way suggested by Ludlow. In the end, I may report both raw and some form of imputed statistics side by side (so people can pick their poison :) ).
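
To make the descriptives comparison concrete, here is a minimal sketch (toy data, not the NCES dataset or code) of the two options side by side: p-values computed on observed responses only, with n varying by item, versus p-values with omits scored as incorrect; not-reached items stay out of the base in both cases, following the coding above.

# Toy matrix: 1 = correct, 0 = incorrect, "O" = omitted, "R" = not reached
resp = [
    [1, 1, "O", "R"],
    [1, 0, 1, "R"],
    [0, 1, 1, 0],
]

def p_values(rows, omit_as_zero):
    # Column-wise proportion correct; omits either dropped or scored 0,
    # not-reached items always excluded from the base.
    p, n = [], []
    for j in range(len(rows[0])):
        scores = []
        for row in rows:
            v = row[j]
            if v == "R":
                continue
            if v == "O":
                if omit_as_zero:
                    scores.append(0)
                continue
            scores.append(v)
        n.append(len(scores))
        p.append(sum(scores) / len(scores))
    return p, n

p_obs, n_obs = p_values(resp, omit_as_zero=False)
p_zero, n_zero = p_values(resp, omit_as_zero=True)
print("responders only: p =", [round(v, 2) for v in p_obs], " n =", n_obs)
print("omits scored 0:  p =", [round(v, 2) for v in p_zero], " n =", n_zero)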

Then there is the issue of what to use for the Rasch analysis.

Also, the Ludlow and O'Leary (1999, p. 6) article suggests the following method [I have added comments in brackets]:

Strategy 4: Difficulty and Ability Estimates Derived From a Two-Phase Procedure

The final strategy uses a stepwise approach to minimize the statistical effect of unwise student behaviors (omitted items) and adverse testing situations (not-reached items). First, all instructions to students (both verbal and written) encourage them to provide an answer to each question in the test. In DRT and TIMSS test booklets, for example, students were advised to choose the answer they thought was best even in situations in which they were not sure of the correct answer. Students should understand that in these types of testing situations, any answer is better than no answer.

[Criterion met. The admin has control and in all cases is administering all items up until a certain point.]

Second, a two-phase IRT estimation procedure is implemented. During the first phase, only the item difficulty parameters are estimated. Omitted responses are scored as incorrect, and not-reached items are treated as not administered (as done in Strategy 3). This attempts to ensure that regardless of why the blanks occurred, only legitimate, intentional responses factor into the item estimation process.

[I might use this approach for my descriptives as well.]

In the second phase, the item calibrations are treated as fixed (or anchored) from the first phase above. For the sole purpose of student ability estimation, both omitted and not-reached items are now scored as incorrect. Here, the negative effect of not following the instructions as closely as possible is placed squarely on the student.

This is somewhat similar to what Juho suggested in his last post, except that in the phase-one item parameter estimation he did not score omitted responses as incorrect; he left them as missing.
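
To make the two-phase mechanics concrete, here is a minimal sketch (toy data and a crude gradient-based JMLE, not the operational estimation software; all names are made up): phase one calibrates item difficulties with omits scored 0 and not-reached items treated as not administered; phase two anchors those difficulties and scores every blank as 0, for person ability estimation only.

import numpy as np

def rasch_prob(theta, b):
    # Rasch probability of a correct response, persons x items
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def jmle(x, n_iter=500, lr=0.1):
    # Crude gradient-ascent joint estimation; np.nan = not administered
    mask = ~np.isnan(x)
    theta, b = np.zeros(x.shape[0]), np.zeros(x.shape[1])
    for _ in range(n_iter):
        resid = np.where(mask, x - rasch_prob(theta, b), 0.0)
        theta += lr * resid.sum(axis=1)   # persons move toward their observed scores
        b -= lr * resid.sum(axis=0)       # items move the opposite way
        b -= b.mean()                     # identification: mean item difficulty = 0
    return theta, b

def estimate_theta(x, b, n_iter=500, lr=0.1):
    # Ability estimation with item difficulties anchored at b
    mask = ~np.isnan(x)
    theta = np.zeros(x.shape[0])
    for _ in range(n_iter):
        resid = np.where(mask, x - rasch_prob(theta, b), 0.0)
        theta += lr * resid.sum(axis=1)
    return theta

# Toy responses in administration order: 1/0 scored, "O" omitted, "R" not reached
resp = [
    [1, 1, "O", "R"],
    [1, 0, 1, "R"],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 0, "O", 1],
]

# Phase 1: omits -> 0 (incorrect), not-reached -> NaN (not administered)
x1 = np.array([[0.0 if v == "O" else (np.nan if v == "R" else float(v)) for v in row]
               for row in resp])
_, b_anchor = jmle(x1)

# Phase 2: item difficulties anchored; omits AND not-reached -> 0, abilities only
x2 = np.array([[0.0 if v in ("O", "R") else float(v) for v in row] for row in resp])
theta = estimate_theta(x2, b_anchor)

print("anchored item difficulties:", np.round(b_anchor, 2))
print("person abilities:          ", np.round(theta, 2))

In the real analysis the calibration would of course be run in the operational software with the item anchors carried over; the sketch just shows where the two recodings enter.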

I think I have enough to work with now and will try these different ideas and compare.  Again, thank you so much for your responsiveness and help!!


Carolyn


-----------------------------------------------

Carolyn G. Fidelman, Ph.D.
Research Scientist | National Center for Education Statistics
202-502-7312 | carolyn.fidelman at ed.gov | http://nces.ed.gov/

