[Rasch] Rasch: analyze two versions of a test

Agustin Tristan ici_kalt at yahoo.com
Sun Jun 23 05:46:12 EST 2013


I am facing this problem again, in light of Lucia's question and the answers given so far:
 
Question: I am doing a research and have method A and method B, which is better?
Answer 1: Do both and compare. If you find no differences, the two methods are the same; if you do find differences, they are different.
Answer 2: Why use methods A and B at all? Try method C.
 
Answer 3 could be: As a next step we could propose method D, or any other procedure... and if researcher X has his own software then method X is better, or if agency Y uses method Y then method Y is better...
 
I'm lost.
 
Regards
Agustin

INSTITUTO DE EVALUACION E INGENIERIA AVANZADA.
Ave. Cordillera Occidental No. 635
Colonia Lomas 4ª San Luis Potosí, San Luis Potosí.
C.P. 78216 MEXICO
(52) (444) 8 25 50 76 / (52) (444) 8 25 50 77 / (52) (444) 8 25 50 78
Web page (in Spanish): http://www.ieia.com.mx/
Web page (in English): http://www.ieesa-kalt.com/English/Frames_sp_pro.html


From: Rense Lange <rense.lange at gmail.com>
To: rasch at acer.edu.au 
Sent: Saturday, June 22, 2013 10:00 AM
Subject: Re: [Rasch] Rasch: analyze two versions of a test

Is there any way to have two or more raters evaluate the same people on a fairly large scale? If so, you can also check rater effects using Facets… Even if you had only a very limited number of double/triple/… ratings, large rater differences/biases would be a sign for caution.

Rense Lange
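
As a crude first pass at the screen Rense describes, before a full Facets run: assuming a hypothetical long-format ratings table with columns person, rater, and score (none of this comes from the thread), the sketch below flags raters whose scores sit consistently above or below their colleagues' on the same candidates. It is only a rough leniency/severity indicator, not a substitute for a many-facet Rasch analysis.

```python
import pandas as pd

def rater_severity_screen(ratings: pd.DataFrame) -> pd.DataFrame:
    """Rough leniency/severity screen from overlapping ratings.

    `ratings` is assumed to have one row per rating, with columns
    'person', 'rater', and 'score'.  Only persons rated by two or
    more raters carry any information here.
    """
    # keep only the double/triple/... rated persons
    multi = ratings.groupby("person")["rater"].transform("nunique") > 1
    d = ratings[multi].copy()
    # deviation of each rating from that person's mean across raters:
    # a consistently positive mean hints at leniency, negative at severity
    d["dev"] = d["score"] - d.groupby("person")["score"].transform("mean")
    return d.groupby("rater")["dev"].agg(["mean", "count"])

# hypothetical usage: persons rated by a single rater drop out automatically
ratings = pd.DataFrame({
    "person": [1, 1, 2, 2, 3],
    "rater":  ["A", "B", "A", "B", "A"],
    "score":  [4, 5, 3, 3, 4],
})
print(rater_severity_screen(ratings))
```

A large mean deviation on a decent count is the "sign for caution" Rense mentions; the count column matters, since a deviation based on a handful of overlaps is mostly noise.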

On Jun 22, 2013, at 8:17 AM, "Bond, Trevor" <trevor.bond at jcu.edu.au> wrote:

>Dear Lucia
>You do it both ways, expecting invariance.
>Where you don't, you look for reasons.
>Then choose.
>TGB
>
>
>Sent from 007's iPad
>
>On 22/06/2013, at 12:19 PM, "Lucia Luyten" <Lucia.Luyten at arts.kuleuven.be> wrote:
>
>
>>Hi 
>>
>>
>>I have a question about analyzing two versions of a test. 
>>
>>
>>Say we have 130 items for a test, and we make two versions of it. Version A contains items 1 to 80, and version B contains items 50 to 130, so items 50 to 80 appear in both versions. In version A, items 1 to 30 are anchor items from a previous test; for these we know, and use, the measures from a previous Facets analysis. These anchor items (1-30) occur only in version A, not in version B.
>>
>>
>>About 400 candidates take version A, and about 250 take version B. The tests are rated by 4 raters. The raters rate both versions, and each test taker is rated by one randomly assigned rater.
>>
>>
>>One might take all candidates together in a single analysis. Or one might first analyze version A separately (using the known measures for the anchor items), and then use the outcome, i.e. the measures for the shared items (50-80) and the measures for the raters, as anchors in a subsequent analysis of version B. (A sketch of this anchored approach appears after this message.)
>>
>>
>>Which way of analyzing is preferable, and why?
>>
>>
>>Kind regards, 
>>
>>
>>Lucia Luyten
>>
>>
>>
>>
>>Lucia Luyten
>>research associate
>>CNaVT / CTO / KULeuven
>>Blijde-Inkomststraat 7 bus 3319
>>3000 Leuven
>>016 32 53 59
>>fax 016 32 53 60
>>lucia.luyten at arts.kuleuven.be
>>
>>http://www.cnavt.org/
>>http://www.cteno.be/
>>
>>
>>
>> 
>>
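
To make Lucia's two strategies concrete: the sketch below is a minimal joint-maximum-likelihood (JMLE) calibration for dichotomous Rasch items with optional anchored difficulties. It is an illustration only, not Facets, and it ignores the rater facet entirely; all names and index choices are assumptions, not anything from the thread. Stacking versions A and B into one 650 x 130 response matrix (NaN where an item was not administered) with items 1-30 fixed at their known measures gives the concurrent run; calibrating version A first and then fixing the shared items 50-80 at their version-A estimates gives the two-step run. Comparing the shared-item estimates across the two runs is one way to operationalize Trevor's "do it both ways, expecting invariance".

```python
import numpy as np

def rasch_jmle(X, anchors=None, n_iter=500, tol=1e-4):
    """Dichotomous Rasch calibration by joint maximum likelihood.

    X       : persons x items array of 0/1 responses, NaN = not administered.
    anchors : optional {item_index: fixed_difficulty_in_logits}.
    Returns (theta, beta) in logits.  Persons or items with perfect or zero
    scores must be removed first, and no JMLE bias correction is applied.
    """
    admin = ~np.isnan(X)                 # which responses actually exist
    Xz = np.nan_to_num(X)                # NaN -> 0; masked out via `admin`
    theta = np.zeros(X.shape[0])         # person abilities
    beta = np.zeros(X.shape[1])          # item difficulties
    fixed = dict(anchors or {})
    for j, b in fixed.items():
        beta[j] = b

    for _ in range(n_iter):
        # model probabilities, zeroed where an item was not administered
        p = np.where(admin, 1.0 / (1.0 + np.exp(-(theta[:, None] - beta))), 0.0)
        info_p = (p * (1.0 - p)).sum(axis=1)
        info_i = (p * (1.0 - p)).sum(axis=0)
        # one Newton-Raphson step per parameter set
        theta_new = theta + (Xz - p).sum(axis=1) / np.maximum(info_p, 1e-9)
        beta_new = beta - (Xz - p).sum(axis=0) / np.maximum(info_i, 1e-9)
        for j, b in fixed.items():       # re-impose the anchors each step
            beta_new[j] = b
        if not fixed:                    # otherwise centre free items at zero
            beta_new -= beta_new.mean()
        shift = max(np.abs(theta_new - theta).max(),
                    np.abs(beta_new - beta).max())
        theta, beta = theta_new, beta_new
        if shift < tol:
            break
    return theta, beta

# Illustrative use on Lucia's design (all array names hypothetical):
# concurrent run: 650 x 130 matrix, items 1-30 (indices 0..29) anchored
#   theta, beta = rasch_jmle(stacked_ab, anchors=known_measures)
# two-step run: calibrate A, then anchor B on the shared items 50-80
#   theta_a, beta_a = rasch_jmle(version_a, anchors=known_measures)
#   shared = {j: beta_a[j] for j in range(49, 80)}   # items 50-80
#   theta_b, beta_b = rasch_jmle(version_b, anchors=shared)
```

If the shared-item estimates from the two runs differ by more than anchoring noise, that is the point at which, in Trevor's words, you look for reasons, then choose.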

_______________________________________________
Rasch mailing list
Rasch at acer.edu.au
Unsubscribe: https://mailinglist.acer.edu.au/mailman/options/rasch/ici_kalt%40yahoo.com