[Rasch] Rasch: analyze two versions of a test

rsmith rsmith at jampress.org
Mon Jun 24 02:29:05 EST 2013


Excellent advice, Juho!

One of the myths of the achievement testing world is the belief that an item will have the same difficulty whether it is the first or the last item on the test, when in fact this is hardly ever true.  So we have to construct our tests to encourage the data to have the invariance properties necessary so that equating with the Rasch model produces the most useful result.
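As a quick illustration of the kind of invariance check this implies, here is a minimal Python sketch. The difficulty values and item numbers below are hypothetical placeholders (not Lucia's data), and the 0.5-logit screening threshold is only a common rule-of-thumb assumption:

```python
# Minimal sketch of a common-item invariance check before Rasch equating.
# All difficulty values below are hypothetical placeholders, not real data,
# and the 0.5-logit screening threshold is only a rule-of-thumb assumption.

import statistics

# Item difficulties (in logits) for the shared items, estimated separately
# from a calibration of version A and a calibration of version B.
diff_a = {50: -0.42, 51: 0.10, 52: 0.85, 53: -1.20, 54: 0.33}
diff_b = {50: -0.30, 51: 0.25, 52: 0.95, 53: -1.05, 54: 1.40}

common = sorted(set(diff_a) & set(diff_b))

# The mean shift between the two calibrations serves as the equating constant.
shifts = [diff_b[i] - diff_a[i] for i in common]
constant = statistics.mean(shifts)

# Items whose displacement remains large after removing the constant are
# candidates for drift (e.g., position effects) and may need to be dropped
# from the link before equating.
flagged = [i for i in common if abs(diff_b[i] - diff_a[i] - constant) > 0.5]

print(f"equating constant: {constant:+.3f} logits")
print(f"items flagged for possible drift: {flagged}")
```

In practice one would iterate: drop the flagged items from the link, recompute the constant from the remaining stable items, and only then equate.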

Richard M. Smith, Editor
Journal of Applied Measurement
P.O. Box 1283
Maple Grove, MN  55311, USA
website:  www.jampress.org
phone: 763-268-2282
fax: 763-268-2782 

-------- Original Message --------
> From: Dr Juho Looveer <juho.looveer at gmail.com>
> Sent: Sunday, June 23, 2013 1:06 AM
> To: rasch at acer.edu.au
> Subject: [Rasch] Rasch: analyze two versions of a test
> 
> Lucia, 
> It looks like the common items may be the last 30 in one test and the first
> 30 in the other.
> if not too late, you might check that the common items are spread in roughly
> similar positions in both tests.
> 
> Regards
> 
> Juho
> 
> 
> 
> Dr Juho Looveer
> Australia
> 
> -----Original Message-----
> From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf
> Of rasch-request at acer.edu.au
> Sent: Sunday, June 23, 2013 12:00 PM
> To: rasch at acer.edu.au
> Subject: Rasch Digest, Vol 95, Issue 9
> 
> Send Rasch mailing list submissions to
> 	rasch at acer.edu.au
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> 	https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> or, via email, send a message with subject or body 'help' to
> 	rasch-request at acer.edu.au
> 
> You can reach the person managing the list at
> 	rasch-owner at acer.edu.au
> 
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of Rasch digest..."
> 
> 
> Today's Topics:
> 
>    1. Rasch: analyze two versions of a test (Lucia Luyten)
>    2. Re: Rasch: analyze two versions of a test (Bond, Trevor)
>    3. Re: Rasch: analyze two versions of a test (Rense Lange)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Sat, 22 Jun 2013 10:18:27 +0000
> From: Lucia Luyten <Lucia.Luyten at arts.kuleuven.be>
> Subject: [Rasch] Rasch: analyze two versions of a test
> To: "rasch at acer.edu.au" <rasch at acer.edu.au>
> Message-ID: <BB8B05E9D328B04D88B8D72DD963D85D37FBA142 at ICTS-S-MBX7.luna.kuleuven.be>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi
> 
> I have a question about analyzing two versions of a test.
> 
> Say we have 130 items for a test. We make two versions of this test: version
> A contains items 1 to 80, and version B contains items 50 to 130, so items
> 50-80 are in both versions. In version A, items 1 to 30 are anchor items
> from a previous test; for these items, we know and use the measures from a
> previous Facets analysis. These anchor items (1-30) occur only in version A,
> not in version B.
> 
> About 400 candidates take version A, and about 250 take version B. The test
> is rated by 4 raters. Raters rate both versions; each test taker is rated by
> one randomly assigned rater.
> 
> One might take all candidates together in a single analysis. Or one can
> first analyze version A separately (using the measures for the anchor
> items), and then use the outcomes, i.e. the measures for the shared items
> (50-80) and the measures for the raters, in a subsequent analysis of
> version B.
> 
> Which way of analyzing is preferable, and why?
> 
> Kind regards,
> 
> Lucia Luyten
> 
> 
> ________________________________
> Lucia Luyten
> wetenschappelijk medewerker
> CNaVT / CTO / KULeuven
> Blijde-Inkomststraat 7 bus 3319
> 3000 Leuven
> 016 32 53 59
> fax 016 32 53 60
> lucia.luyten at arts.kuleuven.be
> http://www.cnavt.org
> www.cteno.be
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Sat, 22 Jun 2013 13:17:56 +0000
> From: "Bond, Trevor" <trevor.bond at jcu.edu.au>
> Subject: Re: [Rasch] Rasch: analyze two versions of a test
> To: "<rasch at acer.edu.au>" <rasch at acer.edu.au>
> Message-ID: <EB8A8EA9-84C6-424F-981B-FBC8E29FA722 at jcu.edu.au>
> Content-Type: text/plain; charset="us-ascii"
> 
> Dear Lucia
> You do it both ways, expecting invariance.
> Where you don't, you look for reasons.
> Then choose.
> TGB
> 
> 
> Sent from 007's iPad
> 
> On 22/06/2013, at 12:19 PM, "Lucia Luyten"
> <Lucia.Luyten at arts.kuleuven.be<mailto:Lucia.Luyten at arts.kuleuven.be>> wrote:
> 
> [original message quoted in full; snipped]
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> Unsubscribe: https://mailinglist.acer.edu.au/mailman/options/rasch/trevor.bond%40jcu.edu.au
> 
> ------------------------------
> 
> Message: 3
> Date: Sat, 22 Jun 2013 10:00:14 -0500
> From: Rense Lange <rense.lange at gmail.com>
> Subject: Re: [Rasch] Rasch: analyze two versions of a test
> To: <rasch at acer.edu.au>
> Message-ID: <A69FA871-D198-46BA-B8B9-EE3FF6E02140 at gmail.com>
> Content-Type: text/plain; charset="windows-1252"
> 
> 
> Is there any way to have two or more raters evaluate the same people on a
> fairly large scale? If so, you can also check rater effects using Facets.
> Even if you had only very limited numbers of double or triple ratings, large
> rater differences/biases would be a sign for caution.
> 
> Rense Lange
> 
> On Jun 22, 2013, at 8:17 AM, "Bond, Trevor" <trevor.bond at jcu.edu.au> wrote:
> 
> > [quoted messages snipped]
> 
> 
> ------------------------------
> 
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> 
> 
> End of Rasch Digest, Vol 95, Issue 9
> ************************************
> 





More information about the Rasch mailing list