[Rasch] Rasch: analyze two versions of a test

Yahoo ici_kalt at yahoo.com
Tue Jun 25 00:12:19 EST 2013


Hello Juho, thank you for so complete answer. What you say means that the data base of an item bank must not only include the item difficulty and fit to the Rasch model but the relative position of the item in connection to the other items in the bank. This sounds that we're proposing a test bank instead of an item bank or we're saying that the item measure is not independent of the sample of the items used in the test.
In a test bank it is true we have to be aware of the validity and objectivity of the specific set of items used on the test and if we change one item we must verify all the specifications, relations with the other items, invariance, equivalence of test forms and so forth.
In an item bank It is supposed that we're not involved in those problems because the items are free to be selected for the test design according to the test blueprint.

Am I losing something?
Regards
Agustin

Enviado desde mi iPhone

El 24/06/2013, a las 06:44 a.m., Dr Juho Looveer <juho.looveer at gmail.com> escribió:

> Augustin  et al
> 
> We aim for local independence, but I think in many tests, some items trigger
> memory and ideas that can be useful for other items.
> e.g. in a secondary school mathematics test, the same algebra skills are
> required to manipulate formulae, to solve equations, for use of formulae
> (substitute and evaluate), etc.
> I am sure that sometimes there are some triggering effects from some items
> to others.
> Hence, it would be preferable to have items appear in a similar order, in
> similar positions, etc.
> Good test making includes balancing the items in a test according to
> content, skills, order, etc; not just throwing the whatever number of items
> in any order that happens to fit a page.
> 
> But you are correct, using CAT, this is a different issue.
> Nit sure if anyone has explored it yet.
> However, using good reasoning, we can consider the situation, and perhaps
> items should be grouped in some way as well, to indicate those to be early
> in a test, or late in  test, or near the middle.
> 
> One difference is that with pen and paper tests, the test taker can go back
> and revise a response if  they have another thought about that item; so the
> order of items can be countered by a student who is prompted for one item by
> their working out another item. I am not sure that this is possible with
> CAT.
> 
> Regards
> 
> Juho
> 
> 
> 
> Dr Juho Looveer
> Australia
> 
> -----Original Message-----
> From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf
> Of rasch-request at acer.edu.au
> Sent: Monday, June 24, 2013 9:28 PM
> To: rasch at acer.edu.au
> Subject: Rasch Digest, Vol 95, Issue 13
> 
> Send Rasch mailing list submissions to
>    rasch at acer.edu.au
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>    https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> or, via email, send a message with subject or body 'help' to
>    rasch-request at acer.edu.au
> 
> You can reach the person managing the list at
>    rasch-owner at acer.edu.au
> 
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of Rasch digest..."
> 
> 
> Today's Topics:
> 
>   1. Re: Rasch: analyze two versions of a test (Agustin Tristan)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Sun, 23 Jun 2013 16:30:26 -0700
> From: Agustin Tristan <ici_kalt at yahoo.com>
> Subject: Re: [Rasch] Rasch: analyze two versions of a test
> To: "rasch at acer.edu.au" <rasch at acer.edu.au>
> Message-ID:
>    <1372030226.62614.YahooMailNeo at web163405.mail.gq1.yahoo.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hello!
> What if the test is not in a paper and pencil format but it is administered
> using?CAT? Do the items need to appear in the same invariant order in every
> CAT version if difficulty has to be invariant too?
> Agustin
> ?
> 
> INSTITUTO DE EVALUACION E INGENIERIA AVANZADA.
> Ave. Cordillera Occidental No. 635
> Colonia Lomas 4? San Luis Potos?, San Luis Potos?.
> C.P. 78216 MEXICO
> (52) (444) 8 25 50 76 / (52) (444) 8 25 50 77 / (52) (444) 8 25 50 78 P?gina
> Web (en espa?ol): http://www.ieia.com.mx/ Web page (in English):
> http://www.ieesa-kalt.com/English/Frames_sp_pro.html
> 
> 
> From: rsmith <rsmith at jampress.org>
> To: rasch at acer.edu.au
> Sent: Sunday, June 23, 2013 11:29 AM
> Subject: Re: [Rasch] Rasch: analyze two versions of a test
> 
> 
> Excellent advice Juho!
> 
> One of the myths of the achievement testing world is the belief that an item
> will have the same item difficulty if it is the first or last item on the
> test, when in effect this is hardly evey true.? So we have to construct our
> tests to encourage the data to have the invariance properties necessary so
> equating with the Rasch model produces the must useful result.
> 
> Richard M. Smith, Editor
> Journal of Applied Measurement
> P.O. Box 1283
> Maple Grove, MN? 55311, USA
> website:? www.jampress.org
> phone: 763-268-2282
> fax: 763-268-2782 
> 
> -------- Original Message --------
>> From: Dr Juho Looveer <juho.looveer at gmail.com>
>> Sent: Sunday, June 23, 2013 1:06 AM
>> To: rasch at acer.edu.au
>> Subject: [Rasch] Rasch: analyze two versions of a test
>> 
>> Lucia, 
>> It looks like the common items may be the last 30 in one test and the
> first
>> 30 in the other.
>> if not too late, you might check that the common items are spread in
> roughly
>> similar positions in both tests.
>> 
>> Regards
>> 
>> Juho
>> 
>> 
>> 
>> Dr Juho Looveer
>> Australia
>> 
>> -----Original Message-----
>> From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On
> Behalf
>> Of rasch-request at acer.edu.au
>> Sent: Sunday, June 23, 2013 12:00 PM
>> To: rasch at acer.edu.au
>> Subject: Rasch Digest, Vol 95, Issue 9
>> 
>> Send Rasch mailing list submissions to
>> ??? rasch at acer.edu.au
>> 
>> To subscribe or unsubscribe via the World Wide Web, visit
>> ??? https://mailinglist.acer.edu.au/mailman/listinfo/rasch
>> or, via email, send a message with subject or body 'help' to
>> ??? rasch-request at acer.edu.au
>> 
>> You can reach the person managing the list at
>> ??? rasch-owner at acer.edu.au
>> 
>> When replying, please edit your Subject line so it is more specific than
>> "Re: Contents of Rasch digest..."
>> 
>> 
>> Today's Topics:
>> 
>> ? ? 1. Rasch: analyze two versions of a test (Lucia Luyten)
>> ? ? 2. Re: Rasch: analyze two versions of a test (Bond, Trevor)
>> ? ? 3. Re: Rasch: analyze two versions of a test (Rense Lange)
>> 
>> 
>> ----------------------------------------------------------------------
>> 
>> Message: 1
>> Date: Sat, 22 Jun 2013 10:18:27 +0000
>> From: Lucia Luyten <Lucia.Luyten at arts.kuleuven.be>
>> Subject: [Rasch] Rasch: analyze two versions of a test
>> To: "rasch at acer.edu.au" <rasch at acer.edu.au>
>> Message-ID:
>> ??? 
>> <BB8B05E9D328B04D88B8D72DD963D85D37FBA142 at ICTS-S-MBX7.luna.kuleuven.be>
>> ??? 
>> Content-Type: text/plain; charset="iso-8859-1"
>> 
>> Hi
>> 
>> I have a question about analyzing two versions of a test.
>> 
>> Say we have 130 items for a test. We make two versions of this test. In
>> version A, we put items number 1 to 80 and in version 2 the items number
> 50
>> to 130. So items 50-80 are in both versions. In version A, the item
> numbers
>> 1 to 30 are anchor items from a previous test. For these items, we know
> and
>> use the measures from a previous Facets analysis. These anchor items
> (1-30)
>> occur only in version A, not in version B.
>> 
>> About 400 candidates take version A, and about 250 take version B. The
> test
>> is rated by 4 raters. Raters rate both versions, each test taker is rated
> by
>> one random rater.
>> 
>> One might choose to take all candidates together for analysis. Or one can
>> choose to first analyze version A separately (using the measures for the
>> anchor items).? And then use the outcome, i.e. the measures for the
>> identical items (number 50-80) and the measures for the raters, in the
>> subsequent analyses of version B.
>> 
>> Which way of analyzing is preferable and why is it?
>> 
>> Kind regards,
>> 
>> Lucia Luyten
>> 
>> 
>> ________________________________
>> Lucia Luyten
>> wetenschappelijk medewerker
>> CNaVT / CTO / KULeuven
>> Blijde-Inkomststraat 7 bus 3319
>> 3000 Leuven
>> 016 32 53 59
>> fax 016 32 53 60
>> lucia.luyten at arts.kuleuven.be<mailto:lucia.luyten at arts.kuleuven.be>
>> http://www.cnavt.org/
>> www.cteno.be<http://www.cteno.be/>
>> 
>> 
>> 
>> 
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL:
> https://mailinglist.acer.edu.au/pipermail/rasch/attachments/20130622/1b502e6
>> b/attachment-0001.html 
>> 
>> ------------------------------
>> 
>> Message: 2
>> Date: Sat, 22 Jun 2013 13:17:56 +0000
>> From: "Bond, Trevor" <trevor.bond at jcu.edu.au>
>> Subject: Re: [Rasch] Rasch: analyze two versions of a test
>> To: "<rasch at acer.edu.au>" <rasch at acer.edu.au>
>> Message-ID: <EB8A8EA9-84C6-424F-981B-FBC8E29FA722 at jcu.edu.au>
>> Content-Type: text/plain; charset="us-ascii"
>> 
>> Dear Lucia
>> You do it both ways, expecting invariance.
>> Where you don't, you look for reasons.
>> Then choose.
>> TGB
>> 
>> 
>> Sent from 007's iPad
>> 
>> On 22/06/2013, at 12:19 PM, "Lucia Luyten"
>> <Lucia.Luyten at arts.kuleuven.be<mailto:Lucia.Luyten at arts.kuleuven.be>>
> wrote:
>> 
>> Hi
>> 
>> I have a question about analyzing two versions of a test.
>> 
>> Say we have 130 items for a test. We make two versions of this test. In
>> version A, we put items number 1 to 80 and in version 2 the items number
> 50
>> to 130. So items 50-80 are in both versions. In version A, the item
> numbers
>> 1 to 30 are anchor items from a previous test. For these items, we know
> and
>> use the measures from a previous Facets analysis. These anchor items
> (1-30)
>> occur only in version A, not in version B.
>> 
>> About 400 candidates take version A, and about 250 take version B. The
> test
>> is rated by 4 raters. Raters rate both versions, each test taker is rated
> by
>> one random rater.
>> 
>> One might choose to take all candidates together for analysis. Or one can
>> choose to first analyze version A separately (using the measures for the
>> anchor items).? And then use the outcome, i.e. the measures for the
>> identical items (number 50-80) and the measures for the raters, in the
>> subsequent analyses of version B.
>> 
>> Which way of analyzing is preferable and why is it?
>> 
>> Kind regards,
>> 
>> Lucia Luyten
>> 
>> 
>> ________________________________
>> Lucia Luyten
>> wetenschappelijk medewerker
>> CNaVT / CTO / KULeuven
>> Blijde-Inkomststraat 7 bus 3319
>> 3000 Leuven
>> 016 32 53 59
>> fax 016 32 53 60
>> lucia.luyten at arts.kuleuven.be<mailto:lucia.luyten at arts.kuleuven.be>
>> http://www.cnavt.org/
>> www.cteno.be<http://www.cteno.be/>
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Rasch mailing list
>> Rasch at acer.edu.au<mailto:Rasch at acer.edu.au>
>> Unsubscribe:
> https://mailinglist.acer.edu.au/mailman/options/rasch/trevor.bond%40jcu.edu.
>> au
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL:
> https://mailinglist.acer.edu.au/pipermail/rasch/attachments/20130622/43e7e66
>> e/attachment-0001.html 
>> 
>> ------------------------------
>> 
>> Message: 3
>> Date: Sat, 22 Jun 2013 10:00:14 -0500
>> From: Rense Lange <rense.lange at gmail.com>
>> Subject: Re: [Rasch] Rasch: analyze two versions of a test
>> To: <rasch at acer.edu.au>
>> Message-ID: <A69FA871-D198-46BA-B8B9-EE3FF6E02140 at gmail.com>
>> Content-Type: text/plain; charset="windows-1252"
>> 
>> 
>> Is there any way to have two or more raters evaluate the same people on a
>> fairly large scale? If so, you can also check rater effects using Facets ?
>> Even if you had only very limited numbers of double/triple/ ? ratings,
> large
>> rater differences/biases would be a sign for caution.
>> 
>> Rense Lange
>> 
>> On Jun 22, 2013, at 8:17 AM, "Bond, Trevor" <trevor.bond at jcu.edu.au>
> wrote:
>> 
>>> Dear Lucia
>>> You do it both ways, expecting invariance.
>>> Where you don't, you look for reasons.
>>> Then choose.
>>> TGB
>>> 
>>> 
>>> Sent from 007's iPad
>>> 
>>> On 22/06/2013, at 12:19 PM, "Lucia Luyten"
> <Lucia.Luyten at arts.kuleuven.be>
>> wrote:
>>> 
>>>> Hi
>>>> 
>>>> I have a question about analyzing two versions of a test. 
>>>> 
>>>> Say we have 130 items for a test. We make two versions of this test. In
>> version A, we put items number 1 to 80 and in version 2 the items number
> 50
>> to 130. So items 50-80 are in both versions. In version A, the item
> numbers
>> 1 to 30 are anchor items from a previous test. For these items, we know
> and
>> use the measures from a previous Facets analysis. These anchor items
> (1-30)
>> occur only in version A, not in version B.
>>>> 
>>>> About 400 candidates take version A, and about 250 take version B. The
>> test is rated by 4 raters. Raters rate both versions, each test taker is
>> rated by one random rater.
>>>> 
>>>> One might choose to take all candidates together for analysis. Or one
> can
>> choose to first analyze version A separately (using the measures for the
>> anchor items).? And then use the outcome, i.e. the measures for the
>> identical items (number 50-80) and the measures for the raters, in the
>> subsequent analyses of version B. 
>>>> 
>>>> Which way of analyzing is preferable and why is it?
>>>> 
>>>> Kind regards,
>>>> 
>>>> Lucia Luyten
>>>> 
>>>> 
>>>> Lucia Luyten
>>>> wetenschappelijk medewerker
>>>> CNaVT / CTO / KULeuven
>>>> Blijde-Inkomststraat 7 bus 3319
>>>> 3000 Leuven
>>>> 016 32 53 59
>>>> fax 016 32 53 60
>>>> lucia.luyten at arts.kuleuven.be
>>>> http://www.cnavt.org/
>>>> 
>>>> www.cteno.be
>>>> 
>>>> 
>>>> 
>>>> ? 
>>>> ? 
>>>> _______________________________________________
>>>> Rasch mailing list
>>>> Rasch at acer.edu.au
>>>> Unsubscribe: 
>>>> https://mailinglist.acer.edu.au/mailman/options/rasch/trevor.bond%40j
>>>> cu.edu.au
>>> _______________________________________________
>>> Rasch mailing list
>>> Rasch at acer.edu.au
>>> Unsubscribe: 
>>> https://mailinglist.acer.edu.au/mailman/options/rasch/rense.lange%40gm
>>> ail.com
>> 
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL:
> https://mailinglist.acer.edu.au/pipermail/rasch/attachments/20130622/ee34427
>> f/attachment-0001.html 
>> 
>> ------------------------------
>> 
>> _______________________________________________
>> Rasch mailing list
>> Rasch at acer.edu.au
>> https://mailinglist.acer.edu.au/mailman/listinfo/rasch
>> 
>> 
>> End of Rasch Digest, Vol 95, Issue 9
>> ************************************
>> 
>> _______________________________________________
>> Rasch mailing list
>> Rasch at acer.edu.au
>> Unsubscribe:
> https://mailinglist.acer.edu.au/mailman/options/rasch/rsmith%40jampress.org
> 
> 
> 
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> Unsubscribe:
> https://mailinglist.acer.edu.au/mailman/options/rasch/ici_kalt%40yahoo.com
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> https://mailinglist.acer.edu.au/pipermail/rasch/attachments/20130623/e2c08b4
> 9/attachment.html 
> 
> ------------------------------
> 
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> 
> 
> End of Rasch Digest, Vol 95, Issue 13
> *************************************
> 
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> Unsubscribe: https://mailinglist.acer.edu.au/mailman/options/rasch/ici_kalt%40yahoo.com


More information about the Rasch mailing list