[Rasch] Fwd: Question Concerning Rasch Analysis on Weighted Stage Data

Bond, Trevor trevor.bond at jcu.edu.au
Thu Jul 14 09:14:46 EST 2011


Exactly, Rense!
As long as the subjects respond to enough of the vignettes (overlap), it is the analysis that determines the vignette difficulty.
T


On 14/07/11 3:15 AM, "Rense Lange" <rense.lange at gmail.com> wrote:

Michael,

The way I understand your coding is that you "baked" the items' difficulty right into the numbers. What I would do is take that out again first. This is not really hard, if I understand things correctly, and if a common rating scale was used.

For instance, if item 6 had difficulty 5 and the rating was 3, you had coded 3*5 = 15. So, just divide these rating products by their item's difficulty index. Then run Winsteps on the recovered ratings as rating scale / partial credit items. Next, check whether the items' Rasch difficulty order is the same as the one you had in mind when you started multiplying. You'd like them to match ...
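A minimal sketch of this un-weighting step in Python (the file names, item labels, and OHC values below are assumptions for illustration, not anything from the thread):

    # Hypothetical un-weighting of the kind Rense describes.
    import pandas as pd

    # Weighted responses: rows = participants, columns = vignette items
    weighted = pd.read_csv("weighted_scores.csv")

    # Order of hierarchical complexity (the "difficulty index") that was
    # multiplied into each item's ratings, one value per item (assumed)
    ohc = {"item1": 9, "item2": 10, "item3": 11}

    # Divide each column by its item's OHC to recover the raw 1-6 ratings
    raw = weighted.copy()
    for item, weight in ohc.items():
        raw[item] = weighted[item] / weight

    # 'raw' can then be exported for Winsteps as an ordinary rating-scale /
    # partial-credit data set, and the resulting Rasch item difficulties
    # compared against the intended OHC ordering
    raw.to_csv("raw_ratings.csv", index=False)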

Rense

On Wed, Jul 13, 2011 at 1:30 PM, Michael Lamport Commons <commons at tiac.net> wrote:

Here's my question about the Rasch analysis.

Question:

We have been trying to measure participants' stages of development via surveys that ask participants to rate, on a 1 to 6 scale, their preference for vignettes that differ in their order of hierarchical complexity. The vignettes are stories which show characters behaving at particular stages. My first assumption, which I am not sure is an appropriate one, is that the items of greater difficulty (higher order of hierarchical complexity) will be rated accordingly. To analyze the data following Bond and Fox, we multiplied the response codes (Likert-scale-esque numbers 1-6) by the items' orders of hierarchical complexity to obtain a weighted score. We then ran a Rasch analysis on the weighted scores.
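For concreteness, a hypothetical illustration of the weighting described above (the item names and OHC values are made up for the example):

    # Raw 1-6 preference ratings for two vignettes (assumed values)
    responses = {"item_low_ohc": 6, "item_high_ohc": 4}

    # Each item's order of hierarchical complexity (assumed values)
    ohc = {"item_low_ohc": 9, "item_high_ohc": 12}

    # Weighted score = response code * item's order of hierarchical complexity
    weighted = {item: responses[item] * ohc[item] for item in responses}
    print(weighted)  # {'item_low_ohc': 54, 'item_high_ohc': 48}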

Is Rasch analysis appropriate for this form of instrument? The "best" one can score on a vignette rating instrument is 6 on the highest-order question. But suppose the participant rates 6 on every item, even those lower in order of hierarchical complexity and therefore difficulty. According to our measurement system, the "best" one can score is then 6 on every item. Rasch inherently assumes that all items measure the same thing and that each item's measurement is INDEPENDENT of the other items, but with our weighting this is clearly not the case. Am I wrong, and is Rasch appropriate here?

My Best,

Michael Lamport Commons, Ph.D.
Assistant Clinical Professor

Department of Psychiatry
Beth Israel Deaconess Medical Center
Harvard Medical School
234 Huron Avenue
Cambridge, MA 02138-1328

Telephone   (617) 497-5270
Facsimile   (617) 491-5270
Cellular    (617) 320-0896
Commons at tiac.net
http://dareassociation.org/





