[Rasch] FACETS design

Mike Linacre mike at winsteps.com
Fri Feb 15 12:40:44 AEDT 2019

Dear Ricardo and friends:

Disconnections in the data are always challenging. Fabio, Rense, and 
Trevor have suggested options. Fundamentally, we are forced to make 
assumptions about relationships. If we do not, then the software will 
fail or impose arbitrary relationships.

If we know something about the raters, tasks, examinees, items, etc., we 
may be able to use "virtual equating" (equating by content - 
www.rasch.org/rmt/rmt193a.htm ) to resolve some of the relationships. 
Otherwise, equating the mean ability/difficulty/leniency of the 
disconnected groups may give a reasonable result. There is usually a 
choice of which aspects of the design to equate or align, so we can try 
the alternatives to see which produces the more reasonable results. 
However, if the decisions based on these results are high-stakes, we may 
need to do more data collection.
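Connectedness can be checked before estimation by treating the elements (here, persons and raters) as nodes of a graph joined by observations; the connected components are exactly the disjoint subsets that Facets reports. A minimal sketch, with invented data and a hand-rolled union-find (not part of any Rasch package):

```python
# Sketch: detecting disconnected subsets in a Facets-style dataset.
# The data and helper functions below are hypothetical illustrations.

def find(parent, x):
    """Union-find root lookup with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    """Merge the components containing a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Each observation links one person and one rater (items are assumed
# fully crossed, so they do not affect connectedness here).
observations = [
    ("P1", "R1"), ("P2", "R1"), ("P2", "R2"),   # subset 1
    ("P3", "R3"), ("P4", "R3"),                 # subset 2: shares no element
]

elements = {e for obs in observations for e in obs}
parent = {e: e for e in elements}
for person, rater in observations:
    union(parent, person, rater)

subsets = {find(parent, e) for e in elements}
print(len(subsets))  # 2 subsets -> measures not comparable without equating
```

With two components found, the options Mike describes apply: supply a judgment-based link (virtual equating) or group-anchor the subsets to a common mean.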

Mike L.

On 2/13/2019 2:50 AM, Ricardo Primi wrote:
> Dear List members, Fabio, Rense and Trevor
> Thanks a lot Fabio, Rense and Trevor for a very interesting discussion!
> I am aware that with the design I proposed it is impossible to create 
> connectedness and link person measures across countries (as was nicely 
> portrayed in Fabio’s table). But that is the challenge. I am very 
> interested to know whether anyone has faced this challenge, or a similar 
> one, before, and what complementary design was proposed to try to 
> solve this linking problem.
> Reading the literature (Eckes, 2015; Myford & Wolfe, 2003, 2004), I 
> found that one possible way would be a two-step anchor/linking 
> design. Imagine we have a writing essay task scored by raters on 4 
> items (quality, cohesion, originality and correct use of language) 
> with a 3-point rubric. Assume a single dimension of 
> writing ability (I know this may not be true, but let’s assume it 
> as an example, to focus on the key aspects of the problem). We could 
> select writing essays from, say, 80 persons representing various 
> levels of ability (low, medium, high), translate them into the various 
> languages, and create a common task for all raters in all countries. 
> I would call this the anchor person group. In Step 1 we run a 
> three-facet design: persons (anchors) × items × raters. All raters 
> from all countries score this group. With this dataset we can then 
> calibrate the rater and item parameters. We may also investigate 
> bias, DRF and DIF. So from Step 1 we save the rater and item parameters.
> Then we go to Step 2, which is the design I presented in an earlier 
> e-mail. But now we know the rater and item parameters, and we can fix 
> them at the values found in Step 1. So although there is no 
> connectedness, we “borrow” the connections found in Step 1. I see this 
> as similar to an anchor-test design, except that it is actually a 
> common-person design. In Step 2 we run a calibration with fixed 
> item and rater parameters, putting the other parameters, specifically 
> the person measures, onto the frame of reference found in Step 1.
> What do you think? Should this work? Other ideas for designs?
> Best
> Ricardo
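The Step 2 estimation Ricardo describes can be sketched numerically. Assuming hypothetical Step-1 anchor values, and dichotomizing the 3-point rubric for brevity (a real analysis would use a polytomous model), fixing the anchors reduces Step 2 to a maximum-likelihood estimate of each person's measure with item difficulties and rater severities held constant:

```python
import math

# Sketch of Step 2: estimating one person's measure with item difficulties
# (D) and rater severities (C) anchored at hypothetical Step-1 values.
# Responses are dichotomized 0/1 for brevity.

anchored = [  # (item difficulty D, rater severity C, observed score x)
    (-0.5,  0.2, 1),
    ( 0.0,  0.2, 1),
    ( 0.8, -0.4, 0),
    ( 1.2, -0.4, 0),
]

def estimate_person(data, tol=1e-6):
    """Newton-Raphson MLE of ability B; D and C are fixed anchors."""
    b = 0.0
    for _ in range(100):
        residual, info = 0.0, 0.0
        for d, c, x in data:
            p = 1.0 / (1.0 + math.exp(-(b - d - c)))  # Rasch probability
            residual += x - p          # observed minus expected score
            info += p * (1.0 - p)      # Fisher information
        step = residual / info
        b += step
        if abs(step) < tol:
            break
    return b

measure = estimate_person(anchored)  # in the Step-1 frame of reference
```

Because D and C never move, every person measure estimated this way is expressed in the Step-1 frame of reference, which is what "borrowing" the Step-1 connections amounts to.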
Mike Linacre, mike at winsteps.com or winsteps1234 at gmail.com
Winsteps 4.3.4 and Facets 3.81.2 - www.winsteps.com 
