[Rasch] Facets feature or bug?

Tom Bramley Bramley.T at cambridgeassessment.org.uk
Tue Apr 3 00:40:30 EST 2012


Dear Jason,
One aspect which hasn't been stressed in the replies so far is the importance of 'plan' or 'design'.  Your original email suggested random allocation.  The more sparse the data set, the more likely (it seems to me) that random allocation will result in disjoint subsets.  I would say don't leave it to chance!

In the two-facet case (persons and items) with missing data we can easily imagine swapping rows and columns of the person-item matrix to reveal two non-overlapping rectangles - a subset of items that has only been answered by a subset of the persons.  If it is not possible to create these non-overlapping rectangles by swapping rows and columns then you are OK.  The same thing presumably applies also in the 3-facet case, although it is much harder to visualise - hence the need for a connectedness testing algorithm as referred to in the Linacre (1997) link below.
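
If you want to check this mechanically rather than by eye, the swapping-rows-and-columns picture is equivalent to asking whether the bipartite person-item graph is connected. Below is a rough sketch in Python (my own illustration only, not the algorithm Facets or Linacre uses; the function name and data layout are just assumptions) that counts the connected components of that graph from a list of (person, item) observations:

    # Illustrative sketch: check whether a sparse person-item data set is
    # connected, i.e. whether rows and columns could be permuted into two
    # (or more) non-overlapping rectangles.  Observations are (person, item) pairs.
    from collections import defaultdict, deque

    def connected_components(observations):
        """Count connected components in the bipartite person-item graph."""
        graph = defaultdict(set)
        for person, item in observations:
            # Prefix the node labels so person and item identifiers cannot collide.
            graph[("P", person)].add(("I", item))
            graph[("I", item)].add(("P", person))

        seen, components = set(), 0
        for start in graph:
            if start in seen:
                continue
            components += 1
            queue = deque([start])
            seen.add(start)
            while queue:
                node = queue.popleft()
                for neighbour in graph[node]:
                    if neighbour not in seen:
                        seen.add(neighbour)
                        queue.append(neighbour)
        return components

    # Example: persons 1-2 only answered items A-B, persons 3-4 only items C-D,
    # so the design splits into two disjoint subsets.
    data = [(1, "A"), (1, "B"), (2, "A"), (3, "C"), (4, "C"), (4, "D")]
    print(connected_components(data))  # -> 2, i.e. NOT connected

The same idea extends to the 3-facet case by adding the raters as a third kind of node in the same graph.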

Regards,
Tom.

Tom Bramley
Assistant Director, Research Division
Assessment Research & Development

Cambridge Assessment
1 Regent Street, Cambridge, CB2 1GG
Direct Dial: 01223 553985
www.cambridgeassessment.org.uk

Cambridge Assessment is the brand name of the University of Cambridge Local Examinations Syndicate, a department of the University of Cambridge. Cambridge Assessment is a not-for-profit organisation.



-----Original Message-----
From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf Of Iasonas Lamprianou
Sent: 02 April 2012 15:16
To: rasch at acer.edu.au
Subject: Re: [Rasch] Facets feature or bug?

Dear all,
in the first edition of the Bond and Fox book (the one I have), the reference to Lunz et al. (1990) is not included. It is also not clear what the exact reference is for Linacre (1997). Could anyone (if Trevor is not available) confirm that these are the two references we are referring to?

Linacre, J. M. (1997). MESA Release Note #3. Retrieved from http://www.rasch.org/rn3.htm on 2 April 2012.

Lunz, M. E., Wright, B. D., & Linacre, J. M. (1990). Measuring the impact of judge severity on examination scores. Applied Measurement in Education, 3(4), 331-345.


Also, at the risk of sounding naive (or even a little careless), I am not quite sure I understand why this "minimal design" is enough to calibrate and compare the raters, given that each rater has seen a different set of observations. If anyone can offer a simple explanation that I could also pass on to other people if asked, that would be great.

Thank you, not only for your support, but also for your patience.

Jason


----- Original Message Follows -----
From: Iasonas Lamprianou <liasonas at cytanet.com.cy>
To: <rasch at acer.edu.au>
Subject: Re: [Rasch] Facets feature or bug?
Date: Mon, 2 Apr 2012 11:58:20 +0300
> thanx. googled it and found the tables you mention.
>
> ----- Original Message Follows -----
> From: "Bond, Trevor" <trevor.bond at jcu.edu.au>
> To: "rasch at acer.edu.au" <rasch at acer.edu.au>
> Subject: Re: [Rasch] Facets feature or bug?
> Date: Mon, 2 Apr 2012 01:54:35 -0700
>
>> Sure, Jason
>> All your data seem to be linked via the students
>> As they are in the example given
>> Check Mike's paper
>> T
>>
>> On 2/04/12 6:11 PM, "Iasonas Lamprianou" <liasonas at cytanet.com.cy> wrote:
>>
>>> thank you Trevor
>>> i am afraid that i only have your edition one of the book, can i find this
>>> information there as well? also i apologise for not understanding fully your
>>> position so i need a clarification. your position is that it is not a bug and
>>> facets can recover some useful info to compare the raters although no double
>>> marking of any magnitude exists. am i missing something?
>>>
>>> ----- Original Message Follows -----
>>> From: "Bond, Trevor" <trevor.bond at jcu.edu.au>
>>> To: "rasch at acer.edu.au" <rasch at acer.edu.au>
>>> Subject: Re: [Rasch] Facets feature or bug?
>>> Date: Mon, 2 Apr 2012 00:30:29 -0700
>>>
>>>> Jason, I think this covers it:
>>>>
>>>> Linacre (1997) displayed three judging rosters for ratings from the Advanced
>>>> Placement Program of the College Board. The complete judging plan of 1,152
>>>> ratings illustrates the ideal plan for both conventional and Rasch analysis.
>>>> This complete judging plan meets the connection requirement between all
>>>> facets because every element (essays, examinees, and judges) can be compared
>>>> directly and unambiguously with every other element.
>>>>
>>>> A much less judge-intensive plan of only 180 ratings also is displayed, in
>>>> which less precise Rasch estimates can be obtained because the facet-linking
>>>> overlap is maintained. The Rasch measures would be less precise than with
>>>> complete data because 83% fewer observations are made. Linacre's final table
>>>> reveals the minimal judging plan, in which each of the 32 examinees' three
>>>> essays is rated by only one judge. Each of the 12 judges rates eight essays,
>>>> including two or three of each essay type, so that the examinee-judge-essay
>>>> overlap of these 96 ratings still enables all parameters to be estimated
>>>> unambiguously in one frame of reference.
>>>>
>>>> Of course, the saving in judges' costs needs to be balanced against the cost
>>>> of low measurement precision, but this plan requires only 96 ratings, 8% of
>>>> the observations required for the complete judging plan. Lunz et al. (1998)
>>>> reported the successful implementation of such a minimal judging plan
>>>> (Linacre, 1997).
>>>>
>>>> B&F 2 p149
>>>>
>>>> On 2/04/12 4:53 PM, "Iasonas Lamprianou" <liasonas at cytanet.com.cy> wrote:
>>>>
>>>>> Dear all,
>>>>> I send this question to all, and not only to Mike, because this question is
>>>>> both related to the Facets software and a methodological question as well.
>>>>>
>>>>> I am running a "typical" scenario where I have markers who mark the
>>>>> responses of students to a test. The markers do not see the whole test, but
>>>>> only individual questions. We do NOT have double marking. So, let's say
>>>>> that we have 1000 students, each one responding to 10 questions. In effect,
>>>>> we have 10,000 responses. Let's say that each one of the 10,000 responses
>>>>> is randomly sent once to one marker. We have 20 markers in total.
>>>>>
>>>>> Observation 1: the 3-d matrix markers x items x students is VERY sparse (we
>>>>> will all agree on that) because we have NO double marking.
>>>>> Observation 2, which is a question as well: I think that the design is NOT
>>>>> linked (no double marking), does everyone agree? However, Facets does not
>>>>> complain about disconnected subsets, I do not know why. Should I not worry?
>>>>> Does Facets assume that, because of randomness, all markers are on the same
>>>>> scale? Is Facets confused and incorrectly thinks that the design is NOT
>>>>> disconnected?
>>>>>
>>>>> Question: If disconnected subsets is a problem in this case, how can I run
>>>>> an analysis in order to identify marker effects using this dataset?
>>>>>
>>>>> Thank you for your help
>>>>> Jason
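
As a rough illustration of why the randomly allocated design described in the quoted messages above usually turns out to be linked via the students, here is a small simulation sketch (again only my own illustration in Python; the 1000 x 10 x 20 numbers are taken from the scenario above and the single-marking allocation rule is assumed). It randomly assigns each response to one marker and then checks, via union-find, whether all 20 markers end up linked through common students:

    # Rough simulation of the quoted scenario: 1000 students, 10 questions,
    # 20 markers, each response marked exactly once by a randomly chosen
    # marker (allocation rule assumed for illustration).  Two markers are
    # "linked" if they marked responses from the same student; the marker
    # facet is connected if all markers end up in one linked group.
    import random
    from collections import defaultdict

    def simulate_connected(n_students=1000, n_items=10, n_markers=20, seed=0):
        rng = random.Random(seed)
        markers_by_student = defaultdict(set)
        for student in range(n_students):
            for item in range(n_items):
                markers_by_student[student].add(rng.randrange(n_markers))

        # Union-find over markers: merge all markers who share a student.
        parent = list(range(n_markers))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(a, b):
            parent[find(a)] = find(b)

        for markers in markers_by_student.values():
            markers = list(markers)
            for other in markers[1:]:
                union(markers[0], other)

        groups = {find(m) for m in range(n_markers)}
        return len(groups) == 1

    print(simulate_connected())  # almost always True with these numbers

With ten responses per student, the markers are almost always linked through shared students, which would explain why Facets raises no subset warning for this data set. With far fewer responses per element, random allocation can split the design into disjoint subsets, which is why a deliberate linking plan is safer than leaving it to chance.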
_______________________________________________
Rasch mailing list
Rasch at acer.edu.au
Unsubscribe: https://mailinglist.acer.edu.au/mailman/options/rasch/bramley.t%40ucles.org.uk





