[Rasch] On Best Practices for Rasch Scoring

Andrew Kyngdon AKyngdon at lexile.com
Wed Sep 21 09:37:10 EST 2011


A third scenario was emailed off-list by somebody, who thinks it is the best scenario. He wrote, and I quote:
"The best scenario is the following, I think: Calibration using N=600 (sample A + sample B). This procedure will improve the estimation (more information is always better than less)."

Ceteris paribus, the more data you have, the more precise your putative Rasch scale scores will be. If you have any concerns about combining two samples of test data, it is wise to enter "Sample A" and "Sample B" as person factors / facets and check for Differential Item Functioning (DIF). If the DIF is non-uniform (i.e., Sample A's and Sample B's item response functions intersect), then there is a problem with combining the data. If it is uniform, then you can "split" the item (I know RUMM2020/2030 has this feature, but I'm not sure about others). Left uncorrected, uniform DIF will bias your person ability estimates. A simple t-test on the two sets of person ability scale scores (i.e., split and unsplit) can tell you whether the means are significantly different.
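For anyone wanting to run such a screen outside RUMM, here is a minimal sketch of a logistic-regression DIF check in the style of Swaminathan & Rogers (1990). It is an illustration only, not RUMM's or Winsteps' procedure; the array names (responses, group) are assumptions, and real work should lean on your Rasch software's DIF output.

import numpy as np
import statsmodels.api as sm

def dif_screen(responses, group, alpha=0.01):
    """responses: (persons x items) 0/1 array; group: 0/1 sample indicator."""
    total = responses.sum(axis=1)
    for i in range(responses.shape[1]):
        y = responses[:, i]
        rest = total - y                       # rest score as the ability proxy
        X = sm.add_constant(np.column_stack([rest, group, rest * group]))
        fit = sm.Logit(y, X).fit(disp=0)
        p_grp, p_int = fit.pvalues[2], fit.pvalues[3]
        if p_int < alpha:                      # intersecting response functions
            print(f"item {i}: non-uniform DIF (p = {p_int:.4f})")
        elif p_grp < alpha:                    # parallel but shifted functions
            print(f"item {i}: uniform DIF (p = {p_grp:.4f}), candidate for splitting")

A significant interaction term corresponds to non-uniform DIF (the response functions intersect); a significant group main effect alone corresponds to uniform DIF. The t-test on split versus unsplit person measures mentioned above is then a one-liner with scipy.stats.ttest_rel.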

I've had two recent experiences with this. One was with a vocabulary test administered to two samples, one "Proficient English" and the other "Learning English". The test constructors thought the test was afflicted by "multidimensionality", but in fact it suffered (as you would expect) from rather severe uniform DIF. Splitting items improved the person estimates considerably. The other was an academic who had combined two samples of managers of different nationalities in a paper. A reviewer stated that this undermined his whole argument. I was able to show that there was no DIF and that combining the samples was justified.

DIF can plausibly have many causes, both quantitative and non-quantitative. A descriptive theory of the psychological system you are investigating should be able to tell you which are at work.

Andrew

Andrew Kyngdon, PhD
MetaMetrics, Inc.
www.lexile.com
My website: https://sites.google.com/site/drandrewkyngdon/home
Measurement Forum: http://groups.google.com/group/talking-measurement

From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf Of Juanito Talili
Sent: Tuesday, 20 September 2011 8:41 PM
Subject: [Rasch] On Best Practices for Rasch Scoring

Recently I wrote:
We are using the Rasch model to develop our test questionnaire.  The end goal of our tests is to compute scores.  Assume that, after a Rasch analysis using n=400 students, I arrive at a math test questionnaire consisting of 50 items.  I will now administer the questionnaire to another group of students (say 200) and then compute the scores for each of those students.
Two scenarios:
Scenario 1: Using n=400, I will save the person measures and item measures, then use those values to compute the scores of the 200 students.
Scenario 2: I will not use any values from the previous analysis.  Instead, I will just use the data obtained from the 200 students and run Winsteps on it directly.
Question: Which scenario is best practice in Rasch analysis?
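For concreteness, Scenario 1 amounts to scoring each of the 200 new students with the 50 item difficulties held fixed at their n=400 calibrated values. Below is a minimal Python sketch under that assumption; the names b_anchored and responses_B are made up for illustration, and it skips the handling of perfect and zero scores (which have no finite maximum likelihood estimate) that Winsteps does for you.

import numpy as np

def score_person(x, b, n_iter=20):
    """ML ability for one 0/1 response vector x, given anchored difficulties b."""
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))           # Rasch P(correct)
        theta += np.sum(x - p) / np.sum(p * (1.0 - p))   # Newton-Raphson step
    return theta

# scores_B = [score_person(row, b_anchored) for row in responses_B]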

A third scenario was emailed off-list by somebody, who thinks it is the best scenario. He wrote, and I quote:
"The best scenario is the following, I think: Calibration using N=600 (sample A + sample B). This procedure will improve the estimation (more information is always better than less)."

Follow-up question:  I think the third scenario says that I have to combine all my data and then pick up the scores for Sample B.  Is the third scenario methodologically sound?  Can someone suggest references regarding the third scenario?  My boss always asks for references when I present new things.
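Computationally, the third scenario just means stacking the two response matrices, calibrating once, and slicing out Sample B's person measures. Here is a toy joint maximum likelihood sketch, illustrative only and not Winsteps' actual algorithm; it assumes a complete 0/1 matrix with no perfect or zero scores, and JML estimates carry a known small-sample bias.

import numpy as np

def jmle(X, n_iter=50):
    """Toy joint ML calibration of a complete (persons x items) 0/1 matrix."""
    theta = np.zeros(X.shape[0])                  # person measures
    b = np.zeros(X.shape[1])                      # item difficulties
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        theta += (X - P).sum(axis=1) / (P * (1.0 - P)).sum(axis=1)
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        b -= (X - P).sum(axis=0) / (P * (1.0 - P)).sum(axis=0)
        b -= b.mean()                             # identification: mean item difficulty = 0
    return theta, b

# pooled = np.vstack([X_A, X_B])                  # N = 400 + 200 = 600
# theta, b = jmle(pooled)
# scores_B = theta[len(X_A):]                     # "pick up" Sample B's scores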

Thank you.
Juanito




From: Gerardo Prieto <gprieto at usal.es>
To: talilij at yahoo.com
Sent: Tuesday, September 20, 2011 5:31 PM
Subject: the best scenario
The best scenario is the following, I think:

Calibration using N=600 (sample A + sample B). This procedure will improve the estimation (more information is always better than less).

Gerardo
