[Rasch] Re: Rasch Digest, Vol 56, Issue 9

Vahid Aryadoust vahidaryadoust at gmail.com
Fri Mar 12 20:58:28 EST 2010


I have come across this problem in tests of listening comprehension: there
is a difference between the hypothesized cognitive complexity of items, as
the theoretical underpinning of the test, and the observed difficulty. I am
not addressing the question of item difficulty after the intervention, only
how theory and Rasch measures may fail to agree.

In a course I had with Prof. Linacre, I asked about Bachman's (2002)
criticism of item difficulty in IRT models. Here is the paragraph:

"Although difficulty is operationalized in different ways in different
measurement models, all of these are, in my view, problematic.
First, some indicators of difficulty are averages of performance across
facets of measurement, and do not consider differential performance of
different individuals. The proportion correct ("p-value") of classical test
theory, for example, is the average performance across test-takers on a
given item, while the mean for a given facet of measurement in
generalizability theory is an average across test-takers, not an individual
estimate. The IRT b parameter is simply the intercept on the latent ability
scale that corresponds to an arbitrarily specified probability of getting
the item correct. As with the classical "p-value", this "difficulty"
estimate is defined with reference to the probability of getting the item
correct. Nevertheless, the item characteristic function clearly illustrates
that this probability varies across ability levels. That is, the item
characteristic function operationalizes the interaction between the latent
ability and performance on a given item, so that what we call "difficulty"
is really nothing more than an artifact of this interaction. The
multifaceted Rasch b parameter and logit capture essentially the same types
of interactions as the IRT b parameter." (pp. 466-68)
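Bachman's point can be made concrete with a small sketch (Python, with hypothetical values): under the dichotomous Rasch model, b is simply the ability at which the success probability crosses 0.5, while the probability itself varies across the whole ability range.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

b = 0.7  # hypothetical item difficulty in logits
for theta in (-2.0, 0.0, 0.7, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={rasch_p(theta, b):.3f}")

# At theta == b the probability is exactly 0.5, which is all the
# b parameter pins down; success probability still varies with theta.
```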

And here is part of Mike's response:
"...What we call "item difficulty" and "person ability" are not the *true*
item difficulty or *true* person ability. They are merely a statistical
summary of the observed behavior in our artificial testing situation,
expressed as a measurement on a hypothetical latent variable...." (I have
truncated Mike's explanation.)

Bachman, therefore, would not agree with some methodological studies, such
as those by Slatyer and Brindley on listening tests developed by ETS.

If we take a different view from Bachman's on this issue, I believe the
problem is that we may not have all the factors or variables that contribute
to the location of items on the latent-trait scale. The cognitive-complexity
approach may then be useful, but it is still obscured in the sense that it
does not include some factors which are decisive in determining the location
of items. I have had trouble understanding the complexity of the cognitive
processes underpinning listening comprehension items; my theoretical
framework captured only part of that complexity, so I observed stark
contrasts between what I expected and what I observed after fitting the
Rasch model to the data.

Thanks
Vahid

On Fri, Mar 12, 2010 at 4:26 PM, <rasch-request at acer.edu.au> wrote:

> Send Rasch mailing list submissions to
>        rasch at acer.edu.au
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> or, via email, send a message with subject or body 'help' to
>        rasch-request at acer.edu.au
>
> You can reach the person managing the list at
>        rasch-owner at acer.edu.au
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Rasch digest..."
>
>
> Today's Topics:
>
>   1. RE: Rasch model affected by item type or not (Parisa Daftari Fard)
>   2. Re: RSM & PCM (Purya Baghaei)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 11 Mar 2010 22:45:53 -0800 (PST)
> From: Parisa Daftari Fard <pdaftaryfard at yahoo.com>
> Subject: RE: [Rasch] Rasch model affected by item type or not
> To: iasonas <liasonas at cytanet.com.cy>, rasch list <rasch at acer.edu.au>
> Message-ID: <121321.37875.qm at web113306.mail.gq1.yahoo.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
>
>
> Thank you for your and Rense's reply.
>
> By item type, I mean any kind of item that theoretically measures one part
> of a unidimensional construct. For example, in reading comprehension, one
> item measures the main idea and another measures guessing, or is a factual
> question, or any other kind of item. Do you think learning would change the
> item locations on the Rasch scale? Does anyone know of any research in this
> respect?
>
> Or let me put it in Agustin's terms. Agustin writes:
>
> " it is expected that before and after intervention you shall move all the
> items in the same direction and the relative position will remain. But if it
> is not the same construct (competencies for elderly persons) then after
> intervention some trait must improve (physical capability to move an
> arm) and other must reduce (incontinency), the relative position will change
> and this will reflect the quality and usefulness of the intervention. "
>
> Can we say that we have a static competence? What if interactional
> competence, or the dynamic nature of competence in a chaos/complexity
> model, is correct? Is the main idea always more difficult than a factual
> question? In a single test, we (Rense Lange and I) did not come up with
> this result: http://www.rasch.org/rmt/rmt232.pdf
>
> Therefore, do you think that any construct can have a static nature and
> show a single pattern before and after intervention, in Rasoul's terms?
>
>
>
> Best,
> Parisa
>
> --- On Thu, 3/11/10, iasonas <liasonas at cytanet.com.cy> wrote:
>
>
> From: iasonas <liasonas at cytanet.com.cy>
> Subject: RE: [Rasch] Rasch model affected by item type or not
> To: "'Rense Lange'" <rense.lange at gmail.com>, "'Parisa Daftari Fard'" <
> pdaftaryfard at yahoo.com>, Rasch at acer.edu.au
> Date: Thursday, March 11, 2010, 10:07 PM
>
>
>
> I agree with Rense, but just a friendly word of caution.
>
> If by item "type" you mean the format of the item, i.e. multiple choice or
> short response, then in many cases this may affect the results a little,
> because the item may tap a slightly different dimension. I am trying to say
> that what Rense suggests happens only if the model-data fit is practically
> satisfactory; otherwise, invariance does not hold.
>
> thank you
>
>
>
> From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On
> Behalf Of Rense Lange
> Sent: Thursday, March 11, 2010 7:32 PM
> To: Parisa Daftari Fard; Rasch at acer.edu.au
> Subject: Re: [Rasch] Rasch model affected by item type or not
>
>
> Items later in the test tend to be affected by fatigue, running out of
> time, etc., and short tests are more sensitive to changes in items. But,
> other than that, items won't change their relative locations regardless of
> what else is in the test. There may be random error. Just take any Rasch
> data set, calibrate all items, then rerun with some items removed. Unless
> very few items remain, a scatter plot of the non-removed items in the two
> calibrations will parallel Y=X.
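Rense's calibrate-then-remove experiment can be sketched in Python. This is a minimal illustration, not a full Rasch calibration: it uses simulated data and rough per-item log-odds difficulty estimates, so the invariance of the retained items shows up as an exact constant shift (from re-centering); with a real estimation routine you would see the same pattern plus sampling noise.

```python
import math, random

random.seed(1)

# Simulate Rasch responses: 400 hypothetical persons, 12 hypothetical items.
thetas = [random.gauss(0, 1) for _ in range(400)]
bs = [-2 + 4 * i / 11 for i in range(12)]  # true difficulties, -2..+2 logits

def respond(theta, b):
    return 1 if random.random() < 1 / (1 + math.exp(-(theta - b))) else 0

data = [[respond(t, b) for b in bs] for t in thetas]

def difficulty(col):
    """Rough log-odds difficulty estimate for one item."""
    p = sum(col) / len(col)
    return math.log((1 - p) / p)

def calibrate(item_idx):
    est = [difficulty([row[j] for row in data]) for j in item_idx]
    mean = sum(est) / len(est)          # anchor: mean item difficulty = 0
    return [e - mean for e in est]

full = calibrate(range(12))
sub = calibrate(range(6, 12))           # rerun with the first 6 items removed

# Retained items keep their relative locations: the two calibrations
# differ only by a constant shift, i.e. the scatter plot parallels Y=X.
shift = full[6] - sub[0]
for k in range(6):
    print(f"item {6+k}: full={full[6+k]:+.2f}  subset+shift={sub[k]+shift:+.2f}")
```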
>
>
> On 3/10/10, Parisa Daftari Fard <pdaftaryfard at yahoo.com> wrote:
>
>
>
>
>
> Dear Rasch list members,
> Hi
>
> I have a question and hope for your replies. Does the test configuration
> (the ordering of item types, and adding or removing item types) affect the
> Rasch outcome in terms of the hierarchical order of the item map?
>
> Does Rasch approach the item generically or pragmatically? By "generic" I
> mean that adding one item type (not to the total number of items, but a
> cognition-related item type) would not affect the pattern of items, because
> the statistics operate on the nature of the item.
>
> I am not sure if I am clear.
>
> Best,
> Parisa
>
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> https://mailinglist.acer.edu.au/mailman/listinfo/rasch
>
>
>
>
> --
> Rense Lange, Ph.D.
> via gmail
>
>
>
>
>
>
> -------------------------------------------------
> Please consider the environment before you print
>
> ------------------------------
>
> Message: 2
> Date: Fri, 12 Mar 2010 09:25:40 +0100
> From: Purya Baghaei <puryabaghaei at gmail.com>
> Subject: Re: [Rasch] RSM & PCM
> To: rasch at acer.edu.au
> Message-ID:
>        <e9b79ce11003120025m4b28800ftee952a18231a9606 at mail.gmail.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Anthony and Thomas,
>
> Andrich (1985) proposed a model called the "equidistant" or DLIM model, I
> think. This model assumes that the distances between the thresholds within
> an item are equal, but not necessarily across items. The model was
> suggested to account for local dependency in educational tests where
> several items are based on one prompt, by forming testlets. The assumption
> of equal distances between thresholds within items in educational tests
> sounds rather impractical. I'm not sure if it's implemented in any
> software. Does anyone out there know of a Rasch programme that fits the
> equidistant model? Or is it possible to fit the model with some command
> statements in Winsteps, ConQuest or RUMM?
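Whatever the software situation, the equidistant constraint itself is easy to state: each item's thresholds are a location plus equal steps. A Python sketch (my reading of the parameterization, with hypothetical numbers) of how such thresholds would feed into partial-credit-style category probabilities:

```python
import math

def equidistant_thresholds(delta, spacing, m):
    """Thresholds for an (m+1)-category item: m equal steps centered on delta."""
    return [delta + spacing * (k - (m + 1) / 2) for k in range(1, m + 1)]

def pcm_probs(theta, thresholds):
    """Category probabilities from adjacent-category thresholds (PCM form)."""
    # cumulative sums of (theta - tau_k), with category 0 as the reference
    logits = [0.0]
    s = 0.0
    for tau in thresholds:
        s += theta - tau
        logits.append(s)
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

taus = equidistant_thresholds(delta=0.5, spacing=0.8, m=3)  # 4 categories
print([round(t, 2) for t in taus])   # equally spaced around delta = 0.5
print([round(p, 3) for p in pcm_probs(0.5, taus)])  # symmetric at theta = delta
```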
>
> Regards
>
> Purya
>
>
> On Wed, Mar 10, 2010 at 2:52 PM, Rodney Staples
> <rodstaples at ozemail.com.au>wrote:
>
> >  Hi Anthony and Thomas,
> >
> > There is a very full discussion of the distinction between Likert scales
> > and Rasch Partial Credit models in Bond And Fox, Applying the Rasch
> Model,
> > Chapter 6.
> >
> >
> >
> > A different example drawn from a satisfaction survey is on my site at:
> > http://members.ozemail.com.au/~rodstaples/Measurement3.htm
> >
> >
> >
> > Hope this helps,
> >
> > Rod
> >
> >
> >
> >
> >
> >
> >
> >
> ___________________________________________________________________________
> >
> > Dr. Rodney Staples.
> >
> > e-mail: rodstaples at ozemail.com.au
> >
> > Telephone: +61 3 9770 2484
> >
> > Mobile: +61 4 1935 9082
> >
> > Web: http://members.ozemail.com.au/~rodstaples/
> >
> >
> >
> >
> >
> > -----Original Message-----
> > From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On
> > Behalf Of Thomas Salzberger
> > Sent: Thursday, 11 March 2010 12:02 AM
> > To: rasch at acer.edu.au
> > Subject: Re: [Rasch] RSM & PCM
> >
> >
> >
> > At 13:42 10.03.2010, you wrote:
> >
> >  Thanks Thomas,
> > It seems that these are just a set of assumptions that we have about our
> > data. I was under the impression that when we talk about unequal
> > distances, either within or across the items, we model the distances and
> > weight them accordingly. That is, each category gets a different score
> > depending on its difficulty, or something along these lines. I think
> > there are some models which require this, aren't there?
> > So we do not need such complicated modelling.
> > We just choose the type of analysis depending on what we think of our
> > data. Right?
> >
> >
> >
> > That is exactly right. Sometimes a common rating scale makes sense. One
> > could at least try it.
> > Obviously it does not make sense when the categories are worded
> differently
> > and it is impossible to run the RSM when the number of categories varies.
> > (That said, you can actually have several RSMs within your instrument
> with
> > some items sharing a common rating scale structure and others not.)
> >
> > The important thing is that weighting category scores (or, in general,
> item
> > scores) is never related to the difficulty of an item (we do not weight
> > difficult dichotomous items higher than easy ones). This is always the
> case,
> > even in general IRT.
> >
> > Weighting refers to discrimination. In the 2pl, items are weighted
> > differently because of different discrimination, not because of different
> > difficulty.
> >
> > In the RSM as well as in the PCM, the discrimination is assumed to be
> equal
> > as this is a key property of the Rasch model.
> > However, in the PCM this fact is somewhat obscured by the fact that
> > different threshold distances between items lead to ICCs which do
> intersect.
> >
> > But at the level of each threshold, the latent response curves are in
> fact
> > parallel.
> >
> > If it helps to illustrate the last point, I might send you a graph from
> > RUMM which illustrates this nicely.
> >
> > Thomas
> >
> >
> >
> >  Anthony
> >
> > --- On Wed, 3/10/10, Thomas Salzberger <thomas.salzberger at gmail.com>
> > wrote:
> >
> > From: Thomas Salzberger <thomas.salzberger at gmail.com>
> >
> > Subject: Re: [Rasch] RSM & PCM
> >
> > To: rasch at acer.edu.au
> >
> > Date: Wednesday, March 10, 2010, 6:13 AM
> >
> > Anthony,
> >
> > let us assume we have a four category item, so there are three thresholds
> > (0/1, 1/2 and 2/3, referred to as tau1, tau2 and tau3, respectively)
> >
> > In the Rating scale model, the distance between the thresholds tau1 and
> > tau2 does NOT need to be equal to the distance between tau2 and tau3.
> >
> > But the difference between tau1 and tau2 has to be equal across all
> items.
> > Likewise the difference between tau2 and tau3 has to be the same for all
> > items.
> >
> > So, no restrictions within the item, but restrictions across items.
> >
> > In other words, in the PCM, each item has its own rating scale structure,
> > while in the rating scale model we have a common rating scale structure
> > across all items.
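Thomas's distinction can be checked mechanically: under the RSM, every item's threshold vector is a pure location shift of one common structure. A short Python sketch with hypothetical threshold values:

```python
def rsm_compatible(item_thresholds, tol=1e-9):
    """True if the items share a common rating-scale structure, i.e. every
    item's thresholds are a pure location shift of the first item's."""
    ref = item_thresholds[0]
    ref_shape = [t - ref[0] for t in ref]
    for taus in item_thresholds[1:]:
        shape = [t - taus[0] for t in taus]
        if any(abs(a - b) > tol for a, b in zip(ref_shape, shape)):
            return False
    return True

# RSM case: the second item's thresholds are the first's, shifted by +0.5.
rsm_items = [[-1.0, 0.2, 1.5], [-0.5, 0.7, 2.0]]
# PCM case: the second item has its own rating-scale structure.
pcm_items = [[-1.0, 0.2, 1.5], [-0.5, 0.1, 2.0]]

print(rsm_compatible(rsm_items))   # True
print(rsm_compatible(pcm_items))   # False
```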
> >
> > The RSM is therefore more restrictive. Whether the PCM fits statistically
> > significantly better than the RSM can be tested by a likelihood ratio
> test.
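The likelihood-ratio test Thomas mentions compares the two models' log-likelihoods; under one common identification convention the degrees of freedom work out to (I-1)(m-1) for I items with m thresholds each. A Python sketch with hypothetical log-likelihoods (the exact parameter counts depend on the constraints your software imposes):

```python
def lr_test_df(n_items: int, n_categories: int) -> int:
    """Degrees of freedom for a PCM-vs-RSM likelihood-ratio test:
    the PCM frees m thresholds per item, the RSM one location per item
    plus m shared thresholds; the difference is (I-1)*(m-1)."""
    m = n_categories - 1          # thresholds per item
    return (n_items - 1) * (m - 1)

def lr_statistic(loglik_rsm: float, loglik_pcm: float) -> float:
    """-2 * log-likelihood ratio; compare to chi-square with lr_test_df df."""
    return -2.0 * (loglik_rsm - loglik_pcm)

# hypothetical log-likelihoods from fitting both models to the same data
print(lr_test_df(n_items=10, n_categories=4))      # 18
print(round(lr_statistic(-2451.3, -2440.1), 1))    # 22.4
```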
> >
> > What you have in mind, a model where all distances between pairs of
> > adjacent thresholds are equal, would be even more restrictive than the
> RSM.
> >
> > At 12:39 10.03.2010, Anthony James wrote:
> >
> > I was just wondering how the PCM accommodates unequal distances when we
> > do not model them.
> >
> > I am sorry, I don't get this statement. When we do not model unequal
> > distances (across items), i.e. we model equal distances, we do not apply
> the
> > PCM.
> >
> >
> >
> >  We just sum up correct responses on each polytomy and analyse it.
> >
> >
> >
> > We always do that. If it's a Rasch model, then raw score sufficiency
> holds.
> >
> > Thomas
> >
> >
> >  A sum score is in fact given to the analysis, not modelled distances
> > among items. Doesn't a PCM here reduce to an RSM?
> >
> > Cheers
> >
> > Anthony
> >
> > --- On Wed, 3/3/10, Anthony James <luckyantonio2003 at yahoo.com> wrote:
> >
> > From: Anthony James <luckyantonio2003 at yahoo.com>
> >
> > Subject: [Rasch] RSM & PCM
> >
> > To: rasch at acer.edu.au
> >
> > Date: Wednesday, March 3, 2010, 2:17 AM
> >
> > --
> >
> > Dear All,
> >
> > I know that this is a very old and probably a boring question for many of
> > you. But I need to know this
> >
> > What is the difference between rating  scale model and partial credit
> > model?
> >
> > What I have gathered is that in the RSM the distances between the points
> > on the scale are equal, and this distance is the same for all the items
> > in the instrument. That is, the ability difference needed to endorse 3
> > rather than 2 is the same as the ability difference needed to endorse 5
> > rather than 4. Right?
> >
> > In the PCM, however, the distances between points on the scale are
> > unequal, both within the items and between the items in the instrument.
> > That is, the ability increment needed to score 3 on an item rather than 2
> > is not the same as the ability increment needed to score 6 rather than 5.
> > And these distances are unequal among the items in the test. Right?
> >
> > Cheers
> >
> > Anthony
> >
> >
> >
> >
> >
> > _______________________________________________
> >
> > Rasch mailing list
> >
> > Rasch at acer.edu.au
> >
> > https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> >
> >  _______________________________________________
> >
> > Rasch mailing list
> >
> > Rasch at acer.edu.au
> >
> > https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> >
> > _______________________________________________________
> >
> > Dr. Thomas Salzberger
> >
> > http://www2.wu-wien.ac.at/marketing/user/salzberger/
> >
> > http://www.wu.ac.at/statmath/faculty_staff/lecturer/salzberger
> >
> > http://www.wu.ac.at/mm/team/salzberg
> >
> > Email: Thomas.Salzberger at wu.ac.at, Thomas.Salzberger at gmail.com
> >
> >
> >
> >  "You can exist without wine but you cannot live..." (Jack Mann)
> >
> >
> >
> >  Measurement in Marketing - An alternative framework:
> > http://www.e-elgar-business.com/Bookentry_DESCRIPTION.lasso?id=13315
> >
> > Copenhagen 2010 International Conference on Probabilistic Models for
> > Measurement: http://www.matildabayclub.net , http://www.rasch2010.cbs.dk/
> >
> > The Matilda Bay Club: http://www.matildabayclub.net
> >
> > Rasch Courses: http://www.education.uwa.edu.au/ppl/courses,
> > http://home.btconnect.com/Psylab_at_Leeds/
> >
> > der markt - Journal für Marketing: http://www.springer.com/dermarkt
> >
> > Präferenzanalyse mit R @ Amazon:
> >
> http://www.amazon.de/Pr%C3%A4ferenzanalyse-mit-Anwendungen-Behavioural-Management/dp/3708903854/ref=sr_1_2?ie=UTF8&s=books&qid=1243162762&sr=1-2
> >
> >
> > _______________________________________________
> >
> > Rasch mailing list
> >
> > Rasch at acer.edu.au
> >
> > https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> >
> >
> >
> > _______________________________________________
> > Rasch mailing list
> > Rasch at acer.edu.au
> > https://mailinglist.acer.edu.au/mailman/listinfo/rasch
> >
> >
>
>
> --
> Purya Baghaei, Ph.D
> English Department,
> Islamic Azad University,
> Ostad Yusofi Str.
> Mashad, Iran.
> Phone: +98 511 6634763
>
>
> ------------------------------
>
> _______________________________________________
> Rasch mailing list
> Rasch at acer.edu.au
> https://mailinglist.acer.edu.au/mailman/listinfo/rasch
>
>
> End of Rasch Digest, Vol 56, Issue 9
> ************************************
>


