[Rasch] many facet Rasch model

Ryan Patrick Bowles rpb3b at cms.mail.virginia.edu
Fri Mar 10 08:52:17 EST 2006


Barth's suggestion was quite reasonable, and a good option 
when rater overlap is not possible. Random equivalence is a 
perfectly justifiable statistical assumption. It is the 
basis of random assignment in clinical trials: the control 
and treatment groups must be randomly equivalent. In 
reality, they are never perfectly equivalent, but provided 
the sample is large enough, the groups will be close enough 
to equivalent that any differences are negligible.
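As a rough illustration (a quick simulation sketch; the 
unit-logit ability SD is assumed purely for scale, not a 
figure from this thread):

import numpy as np

rng = np.random.default_rng(0)

def mean_gap(n, reps=10000):
    # Draw two groups of size n from the same N(0, 1) ability
    # distribution and report the typical (mean absolute)
    # difference between the two group means, in logits.
    a = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
    b = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
    return np.abs(a - b).mean()

for n in (5, 50, 500):
    print(f"group size {n:3d}: typical mean gap ~ "
          f"{mean_gap(n):.2f} logits")

With groups of 5 the typical gap is about half a logit; by 
500 it has shrunk to a few hundredths.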

For this example, linking different rater groups requires 
only an assumption of equal means, with no assumptions 
about higher moments. This is a relatively weak 
requirement, so random equivalence can yield good linking 
with relatively small group sizes. Five raters per panel, 
however, is not a large enough group for random assignment 
to make the equivalence assumption reasonable.
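To put a number on "large enough" (a back-of-the-envelope 
sketch; the 1-logit severity SD is an assumption, not a 
figure from this thread):

import math

def se_mean_diff(sigma, n1, n2):
    # Standard error of the difference between two independent
    # group means with common standard deviation sigma.
    return sigma * math.sqrt(1.0 / n1 + 1.0 / n2)

sigma = 1.0  # assumed spread of rater severities, in logits
for n in (5, 30, 100):
    print(f"{n:3d} raters per panel: SE of mean difference "
          f"= {se_mean_diff(sigma, n, n):.2f} logits")

With five raters per panel the standard error is about 0.63 
logits, and chance differences larger than that occur 
roughly one time in three.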

The comparison with test books is not entirely appropriate. 
In that setting there are two sources of sampling error: 
the assignment of persons to test books, and the assignment 
of items to test books. The large differences Richard notes 
therefore cannot be attributed to a failure of random 
equivalence without further information.
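To see why, consider a rough first-order sketch (unit-logit 
spreads for both abilities and difficulties are assumed; a 
full Rasch analysis would be nonlinear, but the orders of 
magnitude carry over):

import numpy as np

rng = np.random.default_rng(1)

def apparent_form_gap(n_persons, n_items, reps=10000):
    # First-order approximation: the apparent mean ability on
    # a form is the mean ability of the persons assigned to it
    # minus the mean difficulty of the items assigned to it,
    # so both assignments contribute sampling error.
    person = rng.normal(0, 1, size=(reps, 2, n_persons)).mean(axis=2)
    item = rng.normal(0, 1, size=(reps, 2, n_items)).mean(axis=2)
    apparent = person - item
    return np.abs(apparent[:, 0] - apparent[:, 1]).mean()

# Thousands of examinees per form, but only 50 items per form:
print(f"typical apparent gap ~ {apparent_form_gap(2000, 50):.2f} logits")

With 2,000 examinees per form the person side contributes 
almost nothing; nearly all of the apparent gap comes from 
the item side. So a large difference between forms does not 
by itself show that the person groups failed to be randomly 
equivalent.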

Ryan

--On Thursday, March 09, 2006 9:17 PM +0000 
rsmith.arm at att.net wrote:

>
> Interesting idea, Randomly Equivalent!  When I learned
> statistics, random assignment meant random, not randomly
> equivalent.  In state testing programs, where test books
> are randomly assigned, I have seen mean ability
> differences as high as 0.5 logits across forms.  Hardly
> equivalent.  Any equating based on the idea that the
> groups were randomly equivalent is doomed to failure.
> --
> Richard M. Smith
> 12276 Arbor Lakes Parkway North
> Maple Grove, MN 55369
> voice(w): 763-268-2282
> voice(h): 763-494-5047
>
> -------------- Original message from "Barth Riley"
> <barthr at uic.edu>: --------------
>
> Hi Susan
>
> What will most likely happen is that Facets will be
> unable to link ratings of exhibitions across panels.
> Therefore, it will not be possible to compare an exhibit
> rated by panel #1 to an exhibit rated by panel #2, etc.
> If data collection is still ongoing, I would strongly
> encourage moving raters to multiple panels to ensure a
> linkage across panels. The alternative strategy is to
> randomly assign raters to panels. Then we would assume
> that the panels are “randomly equivalent” and then anchor
> each panel to a common logit value, typically 0. This
> group anchoring method can sometimes, though not always,
> allow disjoint subsets in the data to be connected by
> Facets. Generally, the more overlap, the better.
>
> Barth
>
> Barth Riley, Ph.D.
> Res. Asst. Professor & Associate Program Director
> Dept. of Disability and Human Development M/C 626
> University of Illinois-Chicago
> 1640 W. Roosevelt Rd.
> Chicago, IL 60608
> Voice: (312) 355-4054
> Fax:   (312) 355-4058
> Email: barthr at uic.edu
>
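For concreteness, the group anchoring Barth describes 
amounts to constraining each panel's mean rater severity to 
the same value, typically 0. A minimal sketch of that 
constraint outside of Facets (the severity estimates below 
are made up; within Facets this is done through group 
anchoring in the specification file rather than by hand):

def group_anchor(panels, anchor=0.0):
    # Re-center each panel's rater severities so that every
    # panel mean equals the common anchor value; this encodes
    # the "randomly equivalent panels" assumption.
    anchored = {}
    for panel, severities in panels.items():
        mean = sum(severities) / len(severities)
        anchored[panel] = [round(s - mean + anchor, 2)
                           for s in severities]
    return anchored

panels = {  # hypothetical unlinked severity estimates, in logits
    "panel1": [0.8, 0.3, -0.1, 0.5, 0.6],
    "panel2": [-0.4, 0.1, -0.7, 0.0, -0.2],
}
print(group_anchor(panels))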



Ryan Bowles
Department of Psychology
University of Virginia
P.O. Box 400871
Charlottesville, VA 22904-4871
434-982-6508
rpbowles at virginia.edu



