[Rasch] Rasch model affected by item type or not

Parisa Daftari Fard pdaftaryfard at yahoo.com
Thu Mar 11 19:14:48 EST 2010


Dear Agustin,
 
Hi, it's nice to hear from you.
 
Thank you so much for the link and your explanation; I found them informative. What if a learner's ability changes over time? Would you still expect to see the same hierarchical order in your test, given that, as your example below demonstrates, inserting a different item type changes the fit of the other items? If the same person changes over time through learning, would you expect the item hierarchy to be generic or pragmatic?
 
Best,
Parisa  

--- On Thu, 3/11/10, Agustin Tristan <ici_kalt at yahoo.com> wrote:


From: Agustin Tristan <ici_kalt at yahoo.com>
Subject: Re: [Rasch] Rasch model affected by item type or not
To: "Parisa Daftari Fard" <pdaftaryfard at yahoo.com>, "Rasch" <rasch at acer.edu.au>
Date: Thursday, March 11, 2010, 8:14 AM

Dear Parisa, 
 
For me it is clear that the measure or difficulty of an item is an "intrinsic" or "inherent" property, while fit is something "induced" by the sample of persons. That is why I can have an item bank classified by the difficulty of the items (along with subject, taxonomic level and other elements), and the test design must follow some rules to ensure validity.
 
Let's suppose you have this (very small) item bank:
item 1: difficulty +1.0
item 2: difficulty +1.5
item 3: difficulty -0.2
item 4: difficulty +1.3
item 5: difficulty -1.1
item 6: difficulty -2.0
item 7: difficulty +0.4
 
and you build a (very small) test [A] with items 1, 2 and 4, you will have a configuration different from that of another test [B] with items 1, 3 and 6. The test difficulties differ, and you will see that item 1 is the easiest item in test [A] but the hardest in test [B]. So the test configuration changes with the choice of items.
The Rasch model will produce person measures for tests [A] and [B] that should be very similar, even though the tests are different.
In different administrations of the tests, it is expected that item 1 will have a difficulty close to +1.0 (plus/minus the error of the particular group of answers).
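(A minimal sketch of this point, not part of the original message: under the Rasch model, the maximum-likelihood ability is the theta at which the expected raw score equals the observed raw score. In the noise-free limit, where the observed score equals the expected score, the same person gets the same ability estimate from test [A] and test [B], even though the two tests have very different difficulty configurations. The item bank, item subsets and the ability value 0.5 are taken or assumed from the example above.)

```python
import math

# Hypothetical item bank from the example above: item number -> difficulty.
ITEM_BANK = {1: 1.0, 2: 1.5, 3: -0.2, 4: 1.3, 5: -1.1, 6: -2.0, 7: 0.4}

def p_correct(theta, b):
    """Rasch probability of a correct response: ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(items, raw_score):
    """Ability at which the expected raw score on `items` equals `raw_score`.
    Solved by bisection; the expected score is monotone increasing in theta."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if sum(p_correct(mid, ITEM_BANK[i]) for i in items) < raw_score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Same person (theta = 0.5), two different tests from the bank.
theta_true = 0.5
test_A, test_B = [1, 2, 4], [1, 3, 6]

# Noise-free case: observed raw score = expected raw score on each test.
score_A = sum(p_correct(theta_true, ITEM_BANK[i]) for i in test_A)
score_B = sum(p_correct(theta_true, ITEM_BANK[i]) for i in test_B)

est_A = mle_theta(test_A, score_A)
est_B = mle_theta(test_B, score_B)
print(round(est_A, 3), round(est_B, 3))  # both recover theta_true = 0.5
```

With real 0/1 responses the two estimates would of course differ by sampling error, but they estimate the same quantity, which is the sense in which the person measures from tests [A] and [B] "must be very similar".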
Item difficulty is a very "stable" parameter for a focal group, under different conditions of a test, position of the item in the test and other particular situations.
It is clear that if the group is not the focal one (different competencies, gender, region or country, age, etc.) the item calibration may differ from the original expected calibration and you may report DIF among those groups.
For me, that is the reason to design a test with item calibrations, following a design model, not just a random choice of the items. The concept of "Test design line" may be useful (something derived from the design proposed by Wright and Stone).
See:
Wright B.D. & Stone, M.H. (2004) Making measures. The Phaneron Press. Chicago. pp 35-39
Tristan L.A. & Vidal U.R. (2007) Linear model to assess the scale's validity of a test. AERA Meeting, Chicago. Session: New developments in measurement thinking. SIG-Rasch Measurement. Available through ERIC:
http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/3d/9f/ce.pdf
 
In addition, it is possible to see that the item type may produce different calibrations for the same ability. I have produced an analysis of difficulty ranges for items of different types: simple multiple choice, ordering or hierarchy, matching columns, cloze, with graphics or text only, and so forth. The range of difficulties is type-dependent, so if you swap one item for another it is important to check that the item calibrations are similar. And the best part of this analysis is that the paper is in Spanish! If you wish, I can send you a copy; just let me know. I will probably have to translate it in the future.

Hope this helps.
Agustin
FAMILIA DE PROGRAMAS KALT. 
Ave. Cordillera Occidental  No. 635
Colonia  Lomas 4ª San Luis Potosí, San Luis Potosí.
C.P.  78216   MEXICO
 (52) (444) 8 25 50 76
 (52) (444) 8 25 50 77
 (52) (444) 8 25 50 78
web page (in Spanish): http://www.ieia.com.mx
web page (in English) http://www.ieesa-kalt.com/English/Frames_sp_pro.html

--- On Wed, 3/10/10, Parisa Daftari Fard <pdaftaryfard at yahoo.com> wrote:


From: Parisa Daftari Fard <pdaftaryfard at yahoo.com>
Subject: [Rasch] Rasch model affected by item type or not
To: "rasch list" <rasch at acer.edu.au>
Date: Wednesday, March 10, 2010, 9:19 PM

Dear Rasch list members,
Hi
 
I came up with a question and I hope to have your reply. Does the test configuration (the ordering of item types, and the adding or removing of item types) affect the Rasch outcome in terms of the hierarchical order in the item map?
 
Does the Rasch approach treat items generically or pragmatically? By "generic" I mean that adding one item type (not changing the total number of items, but adding a cognition-related item type) would not affect the pattern of the other items, because the statistics operate on the nature of the item itself.
 
I am not sure if I am being clear.
 
Best,
Parisa

_______________________________________________
Rasch mailing list
Rasch at acer.edu.au
https://mailinglist.acer.edu.au/mailman/listinfo/rasch