[Rasch] A practical one

David Andrich D.Andrich at murdoch.edu.au
Tue Oct 11 14:43:06 EST 2005


To assess memory effects, it is necessary to retest shortly after the
first administration.  We tested this with Raven's Progressive Matrices,
readministered two weeks later. There was a memory effect when raw
scores were calculated, but, as always, things are not that simple. On
closer inspection, the practice effect helped students answer the
easier items correctly; that is, they made fewer of what might be seen
as errors genuinely due to inexperience with the format and so on.
However, they did not go beyond their original level in terms of
answering more difficult items correctly.  So to assess this in any test
or situation, a small experiment must be conducted - there will be no
universal answer.
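
A minimal sketch of that kind of check, in Python, for anyone who wants to try it on their own data: given the same persons' dichotomous responses from two sittings and anchored Rasch item difficulties, it asks whether the score gains sit mainly on the easier items. The arrays and the use of a simple rank correlation are illustrative assumptions, not part of the study described above.

    import numpy as np
    from scipy.stats import spearmanr

    def gain_by_difficulty(round1, round2, difficulty):
        # round1, round2: persons x items arrays of 0/1 scores for the SAME persons.
        # difficulty: anchored Rasch item difficulties in logits.
        # Returns the per-item change in proportion correct between the two
        # sittings and its rank correlation with item difficulty. A clearly
        # negative correlation says the gain is concentrated on the easier
        # items rather than reflecting success on more difficult items.
        gain = round2.mean(axis=0) - round1.mean(axis=0)
        rho, p_value = spearmanr(difficulty, gain)
        return gain, rho, p_value

    # Illustrative (made-up) data: 3 persons, 4 items ordered easy to hard.
    r1 = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [1, 1, 1, 0]])
    r2 = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [1, 1, 1, 0]])
    d = np.array([-1.5, -0.5, 0.5, 1.5])
    print(gain_by_difficulty(r1, r2, d))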
David
 
 

David Andrich, BSc, MEd (UWA); PhD(Chic), FASSA
Professor, School of Education 
Murdoch University 
Murdoch, Western Australia 6150 
Email: andrich at murdoch.edu.au

Phone +61 8 9360 2245 
Fax +61 8 93606280 
 http://www.education.murdoch.edu.au/educ_RaschCourse2005.html

 

	-----Original Message-----
	From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf Of Stone, Gregory
	Sent: Monday, 10 October 2005 9:15 PM
	To: Trevor Bond; Looveer, Juho; John Barnard (EPEC); rasch-bounces at acer.edu.au
	Cc: Rasch listserve
	Subject: RE: [Rasch] A practical one
	
	

	On a practical note - I would wonder about the time between the
first examination and the second.  It seems evident that when a
reasonable amount of time elapses, memory fades.  Indeed, several
suggestions have been made that while candidates can in fact honestly
memorize one or maybe two items, the rest are lost: they remember
different choices, different wording, etc., and ultimately fail to use
the "remembered" information to their advantage.
	
	My own experience is that if at least six months pass between
time 1 and time 2, there is virtually no learning or improvement unless
a concerted effort is made to increase skills.  As an example, on one of
my certification examinations we have used the same form 8 times in a
row (twice per year).  There is no perceptible difference in first-time
test-taker pass rates.  The pass rate for second-time takers remains
lower than for first-timers and is largely identical to their initial
performances, with a few minor increases.  After the second attempt,
the probability of passing plunges into the terrible-odds category
(rather like the odds of GW Bush winning the Nobel Peace Prize).  As
this is a high-stakes test, it is not taken lightly.
	
	Apart from time intervals, perhaps it is a semantic difference,
but what sort of "paper" is it?  Multiple choice? Essay?  This too would
have an impact, as I would suspect essays are much easier to remember
than MCQs.
	
	Gregory
	
	Gregory E. Stone, Ph.D., M.A.
	Assistant Professor, Research and Measurement
	University of Toledo, College of Education, Mailstop #923
	
	
	
	
	-----Original Message-----
	From: rasch-bounces at acer.edu.au on behalf of Trevor Bond
	Sent: Mon 10/10/2005 5:17 AM
	To: Looveer, Juho; John Barnard (EPEC); rasch-bounces at acer.edu.au
	Cc: Rasch listserve
	Subject: RE: [Rasch] A practical one
	
	It would also be interesting to know how each repeating person's
	estimate has changed. Do they still fit the model? Are there some
	questions now answered correctly which are well beyond their
	past/present capability?
	
	Juho's suggestion has merit: scale results for the
	non-repeaters according to all items, and "equate" the scores of
	repeaters using only the unknown questions.
	
	best wishes
	T
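	
The checks Trevor mentions can be read off the residuals most Rasch programs already report; a minimal hand-rolled sketch, assuming dichotomous items and anchored difficulties (the function name and the 0.25 "surprise" threshold are arbitrary choices for illustration), would be:

    import numpy as np

    def person_fit(responses, theta, difficulty):
        # Standardised residuals and outfit mean-square for one person under
        # the dichotomous Rasch model, with item difficulties anchored.
        # Large positive residuals on difficult items flag correct answers
        # well beyond the person's estimated level - the pattern one would
        # expect if specific items had been remembered.
        p = 1.0 / (1.0 + np.exp(-(theta - difficulty)))       # P(correct)
        z = (responses - p) / np.sqrt(p * (1.0 - p))          # standardised residuals
        outfit = np.mean(z ** 2)                               # outfit mean-square
        surprising = np.where((responses == 1) & (p < 0.25))[0]
        return z, outfit, surprising

    # Illustrative example: a repeater estimated at theta = 0.0 who
    # nevertheless answers the two hardest items correctly.
    resp = np.array([1, 1, 0, 1, 1])
    diff = np.array([-1.0, 0.0, 0.5, 1.5, 2.0])
    print(person_fit(resp, 0.0, diff))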
	
	  At 6:50 PM +1000 10/10/05, Looveer, Juho wrote:
	>You are assuming that the repeaters have an advantage simply by
	>having done the same test before.  I have seen many teenagers who do
	>the same driving test several times but continue to fail, despite
	>knowing what will be expected of them.
	>So, have the repeaters scored better because they have advance
	>knowledge of the tasks that have been set, and thus these tasks no
	>longer are good indicators of knowledge across the domain?  Or has
	>their ability/performance actually improved - perhaps they were not
	>as well prepared for the test in the first instance?
	>
	>
	>However, assuming that you have considered all this and have reached
	>the correct conclusion, then you now have a pragmatic problem: how
	>to partial out the different effects of advantage due to knowing
	>specific test questions and actual improvement in knowledge.  This
	>may or may not be possible. Can you be sure which questions the
	>repeaters may have remembered? If you can't be sure, then perhaps
	>option 1 is the way to go.
	>
	>Since there are repeaters, I assume that the test has previously
	>been calibrated?  In that case, scale results for the non-repeaters
	>according to all items, and "equate" the scores of repeaters using
	>only the unknown questions.
	>
	>However, if you are dealing with heart surgeons or commercial pilots,
	>I would urge that another test must be set and used.
	>
	>Juho Looveer
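
A minimal sketch of the scaling Juho describes, assuming dichotomous items, difficulties already anchored from the earlier calibration, and that the items the repeaters have not seen before can actually be identified (unseen_idx below is a hypothetical index list):

    import numpy as np

    def ability_mle(responses, difficulty, tol=1e-6, max_iter=50):
        # Maximum-likelihood ability for one person under the dichotomous
        # Rasch model, with item difficulties treated as fixed (anchored).
        # Zero and perfect scores have no finite MLE, so return nan.
        r = responses.sum()
        if r == 0 or r == len(responses):
            return np.nan
        theta = 0.0
        for _ in range(max_iter):
            p = 1.0 / (1.0 + np.exp(-(theta - difficulty)))
            step = (r - p.sum()) / (p * (1.0 - p)).sum()   # Newton-Raphson step
            theta += step
            if abs(step) < tol:
                break
        return theta

    def scale_groups(non_repeat_resp, repeat_resp, difficulty, unseen_idx):
        # Non-repeaters: scaled on all items.
        # Repeaters: "equated" using only the items they have not seen
        # before, still against the anchored difficulties, so both sets of
        # estimates sit on the same logit scale (at the cost of larger
        # standard errors from the shorter item set).
        non_repeat_theta = np.array([ability_mle(x, difficulty)
                                     for x in non_repeat_resp])
        repeat_theta = np.array([ability_mle(x[unseen_idx], difficulty[unseen_idx])
                                 for x in repeat_resp])
        return non_repeat_theta, repeat_theta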
	>
	>________________________________
	>
	>From: John Barnard (EPEC) [mailto:JohnBarnard at bigpond.com]
	>Sent: Mon 10/10/2005 5:10 PM
	>To: Looveer, Juho
	>Subject: RE: [Rasch] A practical one
	>
	>
	>
	>Dear Juho, Trevor...
	>
	>Many thanks for your useful comments. What you are saying is of course
	>true. However, I think your last para, Juho, is the crux. (Is the problem
	>the use of the same question in successive tests?) - not just the question,
	>but the whole paper. This is the scenario and data I was given to analyze.
	>If the difference between the first and second administrations was not
	>as significant for the repeat group as found, I could more easily accept it.
	>But my main concern is, given this significant "increase" in
	>performance for the repeat group and hardly any difference between the
	>two non-repeat groups (rounds 1 and 2), was the repeat group not unfairly
	>advantaged? They failed the first round, but because they had some
	>knowledge (had seen the items before) they had an advantage over the
	>non-repeat group.
	>
	>We all know that this should not happen (other than with link items used for
	>equating, etc.) and that the repeat group could have put in an extra
	>effort, etc., but were they advantaged by having knowledge that others
	>didn't as to what to focus on? My results suggest this.
	>
	>I guess there are 2 (or maybe more) options:
	>1. Assume that the repeaters now have the knowledge (hopefully also in
	>other aspects not directly examined) and use their scores without any
	>adjustment.
	>2. After making sure that apples are compared with apples (common scale,
	>etc.), subtract the "advantage" from the repeat group's ability estimates
	>and then assume that they are directly comparable to the non-repeat
	>group.
	>
	>Kindly
	>John
	>
	>John J Barnard
	>Executive Director: EPEC Pty Ltd
	>www.users.bigpond.com/JohnBarnard/
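
A minimal sketch of what option 2 could look like numerically, assuming all ability estimates are already on a common anchored scale; how the "advantage" is estimated is the contentious part, and the difference-in-differences used here is only one possible assumption:

    import numpy as np

    def adjust_repeaters(repeat_r1, repeat_r2, nonrepeat_r1, nonrepeat_r2):
        # All arguments are arrays of ability estimates (logits) on one
        # common, anchored scale. The "advantage" is taken to be how much
        # more the repeaters gained between rounds than the shift seen
        # between the two non-repeat cohorts; it is then subtracted from
        # the repeaters' round-2 estimates.
        repeat_gain = np.nanmean(repeat_r2) - np.nanmean(repeat_r1)
        cohort_shift = np.nanmean(nonrepeat_r2) - np.nanmean(nonrepeat_r1)
        advantage = repeat_gain - cohort_shift
        return repeat_r2 - advantage, advantage

With the figures from John's original message (quoted further down) - a 0.7 logit gain for the repeaters against hardly any shift between the non-repeat groups - this adjustment would pull the repeaters' round-2 estimates down by roughly 0.7 logits.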
	>
	>
	>-----Original Message-----
	>From: Looveer, Juho [mailto:Juho.Looveer at det.nsw.edu.au]
	>Sent: Monday, 10 October 2005 4:21 PM
	>To: John Barnard (EPEC); Trevor Bond; Rasch list
	>Subject: RE: [Rasch] A practical one
	>
	>
	>As Trevor has said, what is the point of the test?
	>
	>If it is to achieve competence (e.g. flying an aircraft, etc.), then the
	>test can be part of the learning experience.  By having failed on some
	>aspects of the test previously, the testee has gone away and
	>learned/studied/practiced that aspect, and is now competent on it.
	>Objective achieved - that person has more competence than someone who
	>cannot demonstrate their skill in that area.
	>
	>In most academic tests, students memorise material.  Even a doctor will
	>have practiced many medical routines (hopefully on cadavers) before they
	>are let loose on live patients. In academic tests, students will usually
	>have undertaken practice tests or assignments as part of their
	>preparation.
	>
	>But that is also what happens in many other situations - a driving test
	>will each time check that the candidate can undertake a certain minimum
	>set of core competencies. For a piano exam, everyone knows beforehand
	>what pieces they will have to play and what skills they need to
	>demonstrate.
	>
	>
	>Is the purpose to assess whether someone has achieved some
	>knowledge/skill/understanding, or to assess who can achieve this in the
	>least number of attempts? In the latter case, are we really assessing the
	>competence/skill, or aptitude for the skill (i.e. a combination of the
	>skill and the number of attempts at the test)?
	>
	>
	>Is the problem the use of the same question in successive tests?
	>This could then be a defect in test design - where some candidates are
	>given a "step up" or an unfair advantage.  This is why Computer Adaptive
	>Testing relies on a large pool of items - trying to avoid candidates
	>getting an unfair advantage by knowing the questions they will see.
	>
	>
	>
	>Dr Juho Looveer
	>Sydney NSW
	>work phone: 956 18192
	>fax: 956 18055
	>Juho.Looveer at det.nsw.edu.au
	>
	>
	>-----Original Message-----
	>From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf Of John Barnard (EPEC)
	>Sent: Monday, 10 October 2005 2:59 PM
	>To: 'Trevor Bond'; 'Rasch list'
	>Subject: RE: [Rasch] A practical one
	>
	>Of course, Trevor, but let's take it from another angle. If a person has
	>a copy of a paper or can remember some questions, and thus answers some
	>questions correctly in spite of not having the knowledge (but having
	>seen the questions), it is a different story. What about a person who
	>passes a test because of "memorising" some answers? Or, in your metaphor,
	>a person who didn't clear a height but, standing on a step in the second
	>attempt, now clears the height - is this fair to those who didn't use the
	>step?
	>
	>Hopefully the purpose of education is not merely rote learning and
	>memorisation.
	>
	>Kindly
	>John
	>
	>John J Barnard
	>Executive Director: EPEC Pty Ltd
	>www.users.bigpond.com/JohnBarnard/
	>
	>
	>-----Original Message-----
	>From: Trevor Bond [mailto:trevor.bond at jcu.edu.au]
	>Sent: Monday, 10 October 2005 2:28 PM
	>To: John Barnard (EPEC); 'Rasch list'
	>Subject: RE: [Rasch] A practical one
	>
	>
	>Thanks John,
	>
	>This requires us to reflect on the whole nature of educational (and
	>other) testing. High jumpers don't find it easier to clear heights
	>just because they have failed them (been exposed to them) in the
	>past. If mere exposure to the test improves scores, what are we
	>actually testing, and what is the purpose of education?
	>best
	>T
	>
	>
	>At 2:12 PM +1000 10/10/05, John Barnard (EPEC) wrote:
	>>Thanks for your reply, Trevor. Of course we would expect (at least hope
	>>for) improvement in students' performance over time. However, think of
	>>this as a test for, say, pilots to qualify (just for argument's sake). It
	>>is thus a type of selection test rather than a "scholastic achievement"
	>>test where one would expect some growth. (It is thus rather a
	>>qualifying test, and I know one can reason that those who failed could
	>>have put in an extra effort this time.) The hypothesis is that the
	>>repeat group was unfairly advantaged (competing for the same places)
	>>because they have seen the items before. Through what I have done so
	>>far, this seems to be the case. But now I want to account for this,
	>>i.e. trying to be fair to those who have not seen the items before.
	>>
	>>If this was not the case (and there was no additional effort) one would
	>>expect the repeat group to have approximately the same mean ability (if
	>>the same item difficulty estimates are used) in the two sessions. But the
	>>mean ability of the repeat group increased significantly.
	>>
	>>Hope this clarifies the issue a little.
	>>
	>>Kindly
	>>John
	>>
	>>John J Barnard
	>>Executive Director: EPEC Pty Ltd
	>>www.users.bigpond.com/JohnBarnard/
	>>
	>>
	>>-----Original Message-----
	>>From: Trevor Bond [mailto:trevor.bond at jcu.edu.au]
	>>Sent: Monday, 10 October 2005 1:45 PM
	>>To: John Barnard (EPEC); Rasch list
	>>Subject: Re: [Rasch] A practical one
	>>
	>>
	>>Dear John
	>>
	>>Perhaps we (or I, at least) need some more information to understand
	>>your problem. Kids who repeated an exam (after doing make-up work or
	>>just remembering) score better on a test the second time round. They
	>>scored more correct responses at T2 than at T1. This seems to be
	>>exactly what I would expect . . . or hope for . . . as an educator.
	>>Clearly I have missed the nature of your 'problem'.
	>>collegially
	>>Trevor
	>>
	>>At 1:32 PM +1000 10/10/05, John Barnard (EPEC) wrote:
	>>>Dear all
	>>>
	>>>I would appreciate some opinions on the following scenario.
	>>>
	>>>The same paper was given on two occasions. Some 20% of students (let's
	>>>call them the repeaters) who sat the first round (and failed) also sat
	>>>the second round. In round 1, the repeaters' mean ability is (say) 0.5
	>>>logits less than the non-repeaters'. Anchoring the item
	>>>difficulties from the first round and using them in round 2 resulted in
	>>>the repeaters now doing significantly better (say by 0.7 logits on
	>>>average). Also, the repeaters now have approximately the same mean
	>>>ability as the non-repeaters in round 2, say 0.8 logits.
	>>>
	>>>The question is this: if the repeaters' mean ability increased by 0.7
	>>>logits, how can one account for this (taking the same paper again) so as
	>>>not to unfairly advantage the repeat group in round 2? (I am aware of
	>>>learning and other factors, but let's ignore that for the moment.)
	>>>
	>>>Kindly
	>>>John
	>>>
	>>>John J Barnard
	>>>Executive Director: EPEC Pty Ltd
	>>>www.users.bigpond.com/JohnBarnard/
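
Taking the illustrative "say" figures above at face value, the arithmetic of the scenario is simply:

    repeat_r2    = 0.8                      # repeaters' round-2 mean (logits, anchored scale)
    nonrepeat_r2 = 0.8                      # non-repeaters' round-2 mean
    repeat_gain  = 0.7                      # repeaters' improvement between rounds
    repeat_r1    = repeat_r2 - repeat_gain  # implied round-1 mean for repeaters: 0.1
    gap_r1       = 0.5                      # repeaters below non-repeaters in round 1
    gap_r2       = nonrepeat_r2 - repeat_r2 # 0.0 - the gap has closed
    print(repeat_r1, gap_r2)

The 0.5 logit gap from round 1 has closed entirely, which is exactly what prompts the fairness question.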
	>>>
	>>>
	>>>
	>>>
	>  >
	>>
	>>--
	>>Trevor G BOND Ph D
	>>Professor and Head of Dept
	>>Educational Psychology, Counselling & Learning Needs
	>>D2-2F-01A EPCL Dept.
	>>Hong Kong Institute of Education
	>>10 Lo Ping Rd, Tai Po
	>>New Territories HONG KONG
	>>
	>>Voice: (852) 2948 8473
	>>Fax:  (852) 2948 7983
	>>Mob:
	>
	>
	>--
	>Trevor G BOND Ph D
	>Professor and Head of Dept
	>Educational Psychology, Counselling & Learning Needs
	>D2-2F-01A EPCL Dept.
	>Hong Kong Institute of Education
	>10 Lo Ping Rd, Tai Po
	>New Territories HONG KONG
	>
	>Voice: (852) 2948 8473
	>Fax:  (852) 2948 7983
	>Mob:
	>
	>
	>
	>
	>
	
	>
	>
	>
	>
	>
	>
	
	
	
	--
	Trevor G BOND Ph D
	Professor and Head of Dept
	Educational Psychology, Counselling & Learning Needs
	D2-2F-01A EPCL Dept.
	Hong Kong Institute of Education
	10 Lo Ping Rd, Tai Po
	New Territories HONG KONG
	
	Voice: (852) 2948 8473
	Fax:  (852) 2948 7983
	Mob:
	
	_______________________________________________
	Rasch mailing list
	Rasch at acer.edu.au
	http://listserv3.acer.edu.au/mailman/listinfo/rasch
	
	


