[Rasch] Mean differences and measurement error

Hoi Suen HoiSuen at psu.edu
Tue Jun 5 05:34:37 EST 2007


I don't think it is necessary to assume perfect reliability in tests of 
significance such as the simple t-test. Using concepts from the true score 
model of measurement, the standard error of the mean in the denominator 
of the simple t-test is the standard error of the mean _of OBSERVED 
scores_. Since an observed score is the sum of a true score and a 
measurement error score, that standard error of the mean is in fact 
already sqrt(sampling error^2 + measurement error^2). The first term is 
the _sampling_ error of _true_ scores and the second term is random 
measurement error. It is because of these two terms that power in a 
t-test is reduced by both heterogeneity (the first term) and unreliability 
(the second term).
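This decomposition can be checked with a small simulation. The sketch below (all numbers are illustrative, not from any real test) draws observed scores as true score plus random measurement error, and compares the empirical standard deviation of sample means against sqrt(true-score variance/n + error variance/n):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50             # examinees per sample (hypothetical)
reps = 20000       # number of simulated samples
sd_true = 10.0     # assumed true-score SD
sd_err = 6.0       # assumed measurement-error SD

# Observed score = true score + random measurement error
true = rng.normal(100.0, sd_true, size=(reps, n))
err = rng.normal(0.0, sd_err, size=(reps, n))
means = (true + err).mean(axis=1)

# SE of the mean of observed scores, estimated empirically
empirical_se = means.std(ddof=1)

# SE predicted by combining sampling error of true scores
# with random measurement error: sqrt(s_true^2/n + s_err^2/n)
predicted_se = np.sqrt(sd_true**2 / n + sd_err**2 / n)

print(empirical_se, predicted_se)
```

The two values agree closely, illustrating that the observed-score standard error already folds unreliability into the denominator of the t-test.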

This is also true within the conceptual framework of the 
generalizability theory. In that case, the standard error of the mean in 
the t-test is conceptually equivalent to the square root of the expected 
mean error variance in G-theory.


Andrés Burga León wrote:
> In every statistical text, you find that if you want to assess mean 
> differences you can use a simple t-test: (mean1 - mean2) / 
> standard error. But this formula only considers the sampling error; it 
> assumes perfect reliability.
> What about the measurement error? Why isn't it considered in the 
> assessment of mean differences? I'm no expert in this subject, but 
> would it be possible to make a linear composite of sampling error and 
> measurement error? I mean something like sqrt(sampling error^2 + 
> measurement error^2), and so better assess mean differences between groups?

Hoi K. Suen, Ed. D.
Distinguished Professor
Educational Psychology
Penn State
Website: http://suen.ed.psu.edu

