[Rasch] Rating Scale or Partial Credit?

Stephen.Humphry at det.wa.edu.au Stephen.Humphry at det.wa.edu.au
Wed May 17 17:24:56 EST 2006


Tom, it depends on the reason you constructed the items with the intention
of producing 'consistent' thresholds, by which I assume you mean that each
threshold has the same distance from the central (item) location for every
item. Unless it is critical to your research objectives that the thresholds
be consistent, I would look at the evidence to ascertain whether the model
fits better when each item is allowed its own (centralised) thresholds,
taking the additional parameters into account. I would consider any
evidence available about fit, including graphical information such as ICCs.
Generally speaking, I would be more concerned with which collection of
items measures the latent trait(s) than with whether the thresholds are
consistent, although that depends on your objectives.
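
To make the distinction concrete, here is a minimal sketch (not from your
data; all threshold values below are entirely hypothetical) of how the
rating scale model constrains every item to one common set of centralised
thresholds, whereas the partial credit model estimates a separate set of
thresholds for each item:

import numpy as np

def category_probs(theta, thresholds):
    """Category probabilities for one polytomous Rasch item.

    thresholds holds the item's threshold locations delta_1..delta_m.
    Under the rating scale model these are item_location + a common set
    of centralised thresholds tau_k; under the partial credit model they
    are estimated separately for each item.
    """
    deltas = np.asarray(thresholds, dtype=float)
    # Numerator for score x is exp(sum_{k<=x}(theta - delta_k)); score 0 has an empty sum.
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(theta - deltas))))
    return numerators / numerators.sum()

# Entirely hypothetical values, for illustration only.
theta = 0.5                                         # person location (logits)
item_location = -0.2                                # item location
common_tau = np.array([-1.5, -0.5, 0.5, 1.5])       # rating scale: shared centralised thresholds
item_thresholds = np.array([-2.0, -0.3, 0.1, 1.4])  # partial credit: thresholds for this item

print("Rating scale:  ", category_probs(theta, item_location + common_tau))
print("Partial credit:", category_probs(theta, item_thresholds))

Comparing the fit of these two parameterisations, item by item, is
essentially the comparison suggested above.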

 

Disordered thresholds indicate a problem with the data elicited by the
relevant items, because the formal structure of the model entails a latent
Guttman response subspace, which in turn entails ordered thresholds
(Andrich, 1978, 2005). Integer scoring is a classification process in which
a score of x implies that (i) the x lowest thresholds are exceeded and,
simultaneously, (ii) the m - x highest thresholds are not exceeded (where m
is the maximum score for the item). Given the nature of the response
categories comprising an item, it is always a hypothesis that the
thresholds are ordered, and it is empirically possible for the threshold
estimates not to be in their natural order; i.e. the hypothesis is
refutable.
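
As a small illustrative sketch (again with made-up numbers, not from any
calibration), the integer score x can be read as a claim about a latent
Guttman pattern across the item's thresholds, and the ordering of the
threshold estimates is the refutable hypothesis referred to above:

import numpy as np

def implied_guttman_pattern(score, m):
    """Latent pattern implied by an integer score x on an item with m thresholds:
    the x lowest thresholds exceeded (1), the m - x highest not exceeded (0)."""
    return [1] * score + [0] * (m - score)

def thresholds_in_natural_order(estimates):
    """The refutable hypothesis: are the threshold estimates strictly increasing?"""
    return bool(np.all(np.diff(np.asarray(estimates, dtype=float)) > 0))

# Hypothetical estimates for a 5-category item (m = 4); the second set is disordered.
print(implied_guttman_pattern(score=2, m=4))                # [1, 1, 0, 0]
print(thresholds_in_natural_order([-1.2, -0.4, 0.6, 1.3]))  # True: natural order
print(thresholds_in_natural_order([-1.2, 0.5, 0.1, 1.3]))   # False: disordered estimates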

Andrich, D. (1978). A rating formulation for ordered response categories.
Psychometrika, 43, 357-374.

 

Andrich, D. (2005). The Rasch model explained. In Sivakumar Alagumalai,
David D. Curtis, and Njora Hungi (Eds.), Applied Rasch Measurement: A Book
of Exemplars (Chapter 3, pp. 308-328). Springer-Kluwer.

 

Steve

Stephen Humphry, PhD

Senior Educational Measurement Officer, Psychometrics

Department of Education & Training

151 Royal Street, East Perth, 6004

 

Phone: +61 8 92644102

 

-----Original Message-----
From: rasch-bounces at acer.edu.au On Behalf Of Snider-Lotz, Tom
Sent: Wednesday, 17 May 2006 8:59 AM
To: rasch at acer.edu.au
Subject: [Rasch] Rating Scale or Partial Credit?

 

Hello everyone --

 

I am analyzing data from a personality instrument that employs 5-option
Likert-style items.  Inspection of the results shows that the items vary
greatly in terms of how well-spaced and well-ordered their thresholds are.
Should I be using a partial credit model, because empirically I know the
thresholds are different for each item?  Or should I use a rating scale
model, because the items were created with the intention of producing
consistent thresholds?

 

Thanks.

 

  -- Tom Snider-Lotz

___________________________________

Thomas G. Snider-Lotz, Ph.D.

Principal Scientist

 

PreVisor

1805 Old Alabama Road

Suite 150

Roswell, GA 30076

Ph:     678-832-0555

Ph:     800-281-9713 x555

Fax:    770-642-6115

 

http://www.previsor.com

tsnider-lotz at previsor.com
