[Rasch] Mean differences and measurement error

Donald Bacon dbacon at du.edu
Tue Jun 5 08:02:18 EST 2007

I have a related question -- what about correcting for measurement error when there is also sampling error?
I've asked experts to count the number of errors in a writing sample.  I can compute the inter-rater correlation, and thus can estimate the reliability of the measures (I'm working on doing this in Facets also).  However, each sample of writing is just one sample that an individual could produce.  Therefore, there is also sampling error, which can be estimated by examining the distribution of errors within the writing sample.  Small writing samples have higher sampling error than large writing samples, even though the reliability may be the same.
Now when I examine the correlation between these ratings of writing samples and some other measure (I have an objectively-scored, Rasch-modeled measure), and I want to correct for measurement error to estimate the latent correlation, shouldn't I adjust for measurement error and sampling error?  (Does anyone have a good cite on this?)
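One way to sketch the question: the classical Spearman correction for attenuation divides the observed correlation by the square root of the product of the two reliabilities. Folding the writing-sample sampling error into the rater-based reliability as an extra error-variance term is one possible (but not established) way to handle both sources at once. The variance figures below are purely illustrative assumptions, not values from the data described above.

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Classical Spearman correction for attenuation:
    r_true = r_obs / sqrt(rel_x * rel_y)."""
    return r_obs / math.sqrt(rel_x * rel_y)

def combined_reliability(var_true, var_rater, var_sampling):
    """One possible way to fold writing-sample sampling error into
    the reliability: treat it as an additional error-variance term
    alongside rater error (an assumption, not an established result)."""
    return var_true / (var_true + var_rater + var_sampling)

# Illustrative (hypothetical) variance components and correlations:
rel_y = 0.90                                   # reliability of the Rasch-scored measure
rel_x = combined_reliability(1.0, 0.2, 0.1)    # rater error 0.2, sampling error 0.1
r_latent = disattenuate(0.45, rel_x, rel_y)
print(round(rel_x, 3), round(r_latent, 3))     # 0.769 0.541
```

With these made-up numbers, accounting for the sampling error lowers the effective reliability of the error counts from 0.83 (rater error alone) to 0.77, which in turn raises the disattenuated correlation.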
Don Bacon
Associate Professor of Marketing
University of Denver


From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf Of Hoi Suen
Sent: Monday, June 04, 2007 1:35 PM
To: Andrés Burga León
Cc: rasch at acer.edu.au
Subject: Re: [Rasch] Mean differences and measurement error


I don't think it is necessary to assume perfect reliability in tests of significance such as the simple t-test. Using concepts from the true score model of measurement, the standard error of the mean in the denominator of the simple t-test is the standard error of the mean of OBSERVED scores. Since an observed score is a linear combination of the true score and the measurement error score, the standard error of the mean is in fact already sqrt(sampling error^2 + measurement error^2). The first term is the sampling error of true scores and the second term is random measurement error. It is because of these two terms that power in a t-test is reduced by both heterogeneity (the first term) and unreliability (the second term).
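The point above can be checked with a small simulation: if observed scores are true scores plus independent measurement error, the empirical standard error of the observed-score mean already matches sqrt((true variance + error variance) / n), so no separate correction is needed in the t-test denominator. The variances and sample sizes below are arbitrary choices for illustration.

```python
import math
import random
import statistics

random.seed(0)
n, reps = 50, 2000
sigma_true, sigma_err = 1.0, 0.5   # assumed true-score and error SDs

# Simulate many samples of observed scores = true score + measurement error,
# and record the mean of each sample.
means = []
for _ in range(reps):
    obs = [random.gauss(0, sigma_true) + random.gauss(0, sigma_err)
           for _ in range(n)]
    means.append(statistics.mean(obs))

empirical_se = statistics.stdev(means)
theoretical_se = math.sqrt((sigma_true**2 + sigma_err**2) / n)
print(round(empirical_se, 3), round(theoretical_se, 3))
```

The empirical standard error of the mean comes out close to the theoretical value (about 0.158 here), which is the sense in which the observed-score standard error "already contains" both sampling and measurement error.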

This is also true within the conceptual framework of the generalizability theory. In that case, the standard error of the mean in the t-test is conceptually equivalent to the square root of the expected mean error variance in G-theory.


Andrés Burga León wrote: 


	In every statistics text, you find that if you want to assess mean differences you can use a simple t-test: (mean1 - mean2) / standard error. But this formula only considers sampling error. It assumes perfect reliability.


	What about measurement error? Why isn't it considered in the assessment of mean differences? I'm not an expert in this subject, but would it be possible to make a linear composite of sampling error and measurement error? I mean something like sqrt(sampling error^2 + measurement error^2), and so better assess group mean differences?
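The composite being proposed here would look like the sketch below: combine the two error terms in quadrature before forming t. This is only an illustration of the poster's suggestion, not an established procedure, and all the numbers are hypothetical.

```python
import math

def t_composite(mean1, mean2, se_sampling, se_measurement):
    """Form a t-like statistic with a denominator that combines the
    sampling and measurement standard errors in quadrature, as the
    question proposes (illustrative sketch only)."""
    se = math.sqrt(se_sampling**2 + se_measurement**2)
    return (mean1 - mean2) / se

# Hypothetical numbers: group means 10 and 8, sampling SE 0.5,
# measurement SE 0.4.
print(round(t_composite(10, 8, 0.5, 0.4), 3))  # 3.123
```

Note that adding the measurement-error term enlarges the denominator and so makes the test more conservative than the ordinary t-test with the same sampling SE.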




Hoi K. Suen, Ed. D.
Distinguished Professor
Educational Psychology
Penn State
Website: suen.ed.psu.edu 

