[Rasch] Facets feature or bug?

Purya Baghaei puryabaghaei at gmail.com
Mon Apr 2 17:50:14 EST 2012


Hi Jason,
I think you should use group-anchoring: specify that the mean leniency
of the raters is the same in every subset by anchoring the rater group
mean at 0. This should take care of the unconnected data.
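
To make the idea concrete, here is a minimal Python sketch of what
group-anchoring does (an illustration of the idea, not Facets itself):
each disconnected subset's rater measures are re-centred so that every
subset's mean leniency is 0, which puts the subsets into one frame of
reference under the assumption that the groups really do have equal
mean leniency. The rater labels and values below are hypothetical.

    from collections import defaultdict

    def group_anchor(measures, subset_of):
        """measures: {rater: logit}; subset_of: {rater: subset id}."""
        by_subset = defaultdict(list)
        for rater, m in measures.items():
            by_subset[subset_of[rater]].append(m)
        mean = {s: sum(ms) / len(ms) for s, ms in by_subset.items()}
        # Re-centre each subset so its mean rater leniency is 0.
        return {rater: m - mean[subset_of[rater]]
                for rater, m in measures.items()}

    # Hypothetical example: two subsets with no ratings in common.
    measures  = {"R1": 0.50, "R2": -0.10, "R3": 1.20, "R4": 0.80}
    subset_of = {"R1": 1, "R2": 1, "R3": 2, "R4": 2}
    anchored = group_anchor(measures, subset_of)
    print({r: round(v, 2) for r, v in anchored.items()})
    # -> {'R1': 0.3, 'R2': -0.3, 'R3': 0.2, 'R4': -0.2}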

Purya

On 4/2/12, Bond, Trevor <trevor.bond at jcu.edu.au> wrote:
> Jason, I think this covers it:
> Linacre (1997) displayed three judging rosters for ratings from the Advanced
> Placement Program of the College Board. The complete judging plan of 1,152
> ratings illustrates the ideal plan for both conventional and Rasch analysis.
> This complete judging plan meets the connection requirement between all
> facets because every element (essays, examinees, and judges) can be compared
> directly and unambiguously with every other element.
> A much less judge-intensive plan of only 180 ratings is also displayed,
> in which less precise Rasch estimates can still be obtained because the
> facet-linking overlap is maintained. The Rasch measures would be less
> precise than with complete data because 84% fewer observations are made.
> Linacre's final table reveals the minimal judging plan, in which each of
> the 32 examinees' three essays is rated by only one judge. Each of the
> 12 judges rates eight essays, including two or three of each essay type,
> so that the examinee-judge essay overlap of these 96 ratings still
> enables all parameters to be estimated unambiguously in one frame of
> reference.
> Of course, the saving in judges' costs needs to be balanced against the
> cost of lower measurement precision, but this plan requires only 96
> ratings, 8% of the observations required for the complete judging plan.
> Lunz et al. (1998) reported the successful implementation of such a
> minimal judging plan (Linacre, 1997).
> Bond & Fox, 2nd ed., p. 149
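
For anyone who wants to see why such a sparse plan can still hang
together, here is a short Python sketch (mine, not from the book). It
builds a hypothetical 96-rating roster with the same shape as the one
described above (32 examinees x 3 essay types, 12 judges, 8 essays per
judge, every essay rated once) and counts connected subsets with
union-find; the rotation (e + 4*t) % 12 is illustrative, not Linacre's
actual table.

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # (examinee, essay type, judge): one rating per essay.
    ratings = [(e, t, (e + 4 * t) % 12)
               for e in range(32) for t in range(3)]

    # Every observation links its examinee, essay type, and judge.
    for e, t, j in ratings:
        union(("examinee", e), ("type", t))
        union(("type", t), ("judge", j))

    elements = {x for e, t, j in ratings
                for x in (("examinee", e), ("type", t), ("judge", j))}
    print(len(ratings), "ratings,",
          len({find(x) for x in elements}), "connected subset(s)")
    # -> 96 ratings, 1 connected subset(s)

Graph connectivity of this kind is the necessary condition that subset
detection looks for; with so few ratings the estimates are connected
but, as the passage says, much less precise.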
>
>
> On 2/04/12 4:53 PM, "Iasonas Lamprianou" <liasonas at cytanet.com.cy> wrote:
>
>>
>> Dear all,
>> I am sending this question to everyone, not only to Mike, because it
>> relates both to the Facets software and to methodology.
>>
>> I am running a "typical" scenario in which markers mark the responses
>> of students to a test. The markers do not see the whole test, only
>> individual questions. We do NOT have double marking. So, let's say we
>> have 1000 students, each one responding to 10 questions. In effect, we
>> have 10,000 responses. Let's say that each one of the 10,000 responses
>> is randomly sent once to one marker. We have 20 markers in total.
>>
>> Observation 1: the 3-D markers x items x students matrix is VERY
>> sparse (we will all agree on that) because we have NO double marking.
>> Observation 2, which is a question as well: I think that the design is
>> NOT linked (there is no double marking); does everyone agree? However,
>> Facets does not complain about disconnected subsets, and I do not know
>> why. Should I not worry? Does Facets assume that, because of the
>> randomness, all markers are on the same scale? Or is Facets confused,
>> incorrectly treating the design as connected?
>>
>> Question: if disconnected subsets are a problem in this case, how can
>> I run an analysis to identify marker effects using this dataset?
>>
>> Thank you for your help
>>
>> Jason
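
On Jason's Observation 2: a quick simulation suggests why Facets may
not flag subsets here. Even without double marking, each student's 10
responses are scattered across many of the 20 markers, so the markers
are linked through shared students and items. The sketch below (my
illustration, using the numbers from the post) builds such a random
single-marking design and counts connected subsets with union-find.

    import random

    random.seed(1)
    STUDENTS, ITEMS, MARKERS = 1000, 10, 20

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # Each of the 10,000 responses is marked once by a random marker.
    for s in range(STUDENTS):
        for i in range(ITEMS):
            m = random.randrange(MARKERS)
            union(("student", s), ("item", i))
            union(("item", i), ("marker", m))

    print("marker subsets:",
          len({find(("marker", m)) for m in range(MARKERS)}))
    # -> marker subsets: 1

So the design is connected in the sense that subset detection checks,
and marker severities can be estimated in one frame of reference; but
note that with single marking the fairness of those comparisons still
leans on the random allocation of responses to markers.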


-- 
Purya Baghaei, Ph.D
English Department,
Islamic Azad University,
Ostad Yusofi St.
91871-Mashhad, Iran.
Phone: +98 511 6635064-5
Fax: +98 511 6634763


