[Rasch] Item ordered by difficulty?

Paul Barrett paul at pbarrett.net
Tue Aug 3 07:57:47 EST 2010

-----Original Message-----
From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf
Of Donald Bacon
Sent: Sunday, 1 August 2010 4:20 a.m.
To: rasch at acer.edu.au
Subject: [Rasch] Item ordered by difficulty?

Hi all --

   Can anyone send me some good citations for work on the effect of item
order on test performance?  Specifically, the difference between ordering by
increasing difficulty, decreasing difficulty, or randomly?

Thanks --

Don Bacon

Professor of Marketing

  _____  

Hello Don

Try:

Moses, T., Yang, W.-L., & Wilson, C. (2007). Using kernel equating to assess item order effects on test scores. Journal of Educational Measurement, 44(2), 157-178.

Hohensinn, C., Kubinger, K.D., Reif, M., Holocher-Ertle, S., Khorramdel, L., & Frebort, M. (2008). Examining item-position effects in large-scale assessment using the Linear Logistic Test Model. Psychology Science Quarterly, 50(3), 391-402.

Kubinger, K.D. (2008). On the revival of the Rasch model-based LLTM: From constructing tests using item generating rules to measuring item administration effects. Psychology Science Quarterly, 50(3), 311-327.

Weinberger, A.H., Darkes, J., Del Boca, K., Greenbaum, P.E., & Goldman, M.S. (2006). Items as context: Effects of item order and ambiguity on factor structure. Basic and Applied Social Psychology, 28(1), 17-26.

With regard to adaptive testing and order effects .. try ...

Ortner, T.M. (2008). Effects of changed item order: A cautionary note to practitioners on jumping to computerized adaptive testing for personality assessment. International Journal of Selection and Assessment, 16(3), 249-257.

It might also be worth reading:

Wang, T., & Kolen, M.J. (2001). Evaluating comparability in computerized adaptive testing: Issues, criteria, and an example. Journal of Educational Measurement, 38(1), 19-49.

For rather obvious psychological reasons, starting an ability test with anything but the very easiest items is unwise - which puts a big question mark over many IRT-based adaptive tests that insist on starting with mid-difficulty items. This is a recipe for inducing panic and disillusionment in less-able candidates, provoking extra anxiety/stress/neurophysiological arousal, with the knock-on effects these have on cognitive performance. 'Test-taker confidence', 'fear of failure', and 'stereotype threat' are important concepts to bear in mind, along with item ordering, when presenting any test which is being used to make decisions about a candidate.

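To make the point concrete, here is a minimal Rasch-model sketch (an illustration of my own, not taken from any of the papers above). Item information is P(theta)*(1 - P(theta)), which peaks when item difficulty b matches the provisional ability estimate; since CAT engines typically initialise that estimate at theta = 0, the "statistically optimal" first item is one the average candidate passes only half the time - and one a below-average candidate will usually fail.

import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

# With the provisional ability estimate at theta = 0 (the usual CAT
# starting point), a mid-difficulty item (b = 0) is the
# information-optimal first choice ...
for b in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"b = {b:+.1f}   info at theta = 0: {item_information(0.0, b):.3f}")

# ... but a less-able candidate (theta = -1.5) fails that 'optimal'
# starter about 82% of the time, whereas an easy opener (b = -2) is
# passed about 62% of the time - hence the psychological cost noted
# above.
for b in (0.0, -2.0):
    print(f"b = {b:+.1f}   P(correct | theta = -1.5) = {rasch_p(-1.5, b):.2f}")

(Pure illustration: operational CAT engines layer exposure control and content constraints on top of the information criterion, but the starting-point logic is as above.)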

Anyway, the papers above contain many other references .. 

Regards .. Paul

W: www.pbarrett.net
E: paul at pbarrett.net
M: +64-(0)21-415625