[Rasch] Not a Fan of Lexiles?

Paul Barrett pbarrett at hoganassessments.com
Fri Nov 2 07:41:41 EST 2007




________________________________

	From: Trevor Bond [mailto:trevor.bond at jcu.edu.au] 
	Sent: Thursday, November 01, 2007 1:34 PM
	To: Lang, William Steve; Rense; Paul Barrett; rasch at acer.edu.au
	Subject: RE: [Rasch] Not a Fan of Lexiles?
	
	

		It reveals beautifully when raw scores don't add up.
Mean scores for patients and their proxies on the Dysexecutive
Questionnaire (DEX) provided another confirmation of previous findings:
patients and their proxies yield the same total scores. Clinicians using
the DEX expect that a positive difference between proxy (e.g. family)
rating totals and patient self-rating totals implicates dysexecutive
function. The Chan and Bode (Rasch) analysis reveals misfit and
bi-directional DIF.
		

Hello Trevor
 
In general, if you wish to propose a relationship between a set of
symptoms or a composite-attribute diagnosis (dysexecutive function) and
the magnitudes of anything else (whether a discrepancy measure or some
other set of magnitudes), all you do is map the magnitudes onto the
diagnosis "indicator" as a conditional rate function or tabulation.
From what you are saying, there were no discrepancies observed between
peer and self-report ratings on the DEX. 
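 
To make that concrete, here is a minimal Python sketch of such a
conditional rate tabulation. The numbers are invented for illustration
(they are not DEX data), and the bin width is an arbitrary choice:

```python
# Hypothetical sketch: tabulate a binary diagnosis "indicator" as a
# conditional rate function of a discrepancy magnitude (e.g. proxy total
# minus self-rating total). All data below are invented.
from collections import defaultdict

def conditional_rates(discrepancies, diagnoses, bin_width=5):
    """Rate of positive diagnosis within each discrepancy bin."""
    counts = defaultdict(lambda: [0, 0])  # bin -> [positives, total]
    for d, dx in zip(discrepancies, diagnoses):
        b = int(d // bin_width) * bin_width  # lower edge of the bin
        counts[b][0] += dx
        counts[b][1] += 1
    return {b: pos / tot for b, (pos, tot) in sorted(counts.items())}

# Invented example: proxy-minus-self totals and a 0/1 diagnosis per case
disc = [-3, 1, 4, 7, 9, 12, 15, 18, 22, 25]
dx   = [ 0, 0, 0, 1, 0,  1,  1,  1,  1,  1]
print(conditional_rates(disc, dx))
```

Reading off how the diagnosis rate climbs across the discrepancy bins is
the whole of the "relationship" being proposed - no latent variable is
required for that step.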
 
Given a strong theory, or a set of expectations, that peer and
self-report ratings should differ, something was clearly amiss with
either the DEX or the manner in which the dysexecutive function criteria
were being defined. So, the perceptive investigator would drill down to
the item level and take a hard look at the items - both empirically and
from a theory/symptomology perspective - and attempt to figure out why
the problem was occurring.
 
The difference between me and the Rasch investigator at this point is:
 
In my world, no strong assumption is made about the quantitative
structure of a latent variable called "Dysexecutive Function". So I
pick through the data while carefully reconsidering the veracity of
those "theoretical expectations". I make no assumptions about
dimensionality, but merely seek to maximize the cross-validated accuracy
with which the discrepancy measure predicts actual
functional/behavioral deficit, "by any means possible". It is
predictive accuracy on the relevant criteria which defines the success
of my work.
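 
A stdlib-only Python sketch of that criterion, again with invented data
and a deliberately crude "by any means possible" predictor (a single
training-set-optimized threshold), might look like this:

```python
# Sketch of the "cross-validated predictive accuracy" yardstick: score a
# predictor of an external deficit criterion purely by held-out accuracy.
# Data and the threshold rule are invented for illustration.
from statistics import mean

def kfold_accuracy(xs, ys, fit, k=5):
    """Mean held-out accuracy of models returned by fit(train_x, train_y)."""
    n = len(xs)
    accs = []
    for i in range(k):
        test_idx = set(range(i, n, k))
        tr_x = [x for j, x in enumerate(xs) if j not in test_idx]
        tr_y = [y for j, y in enumerate(ys) if j not in test_idx]
        model = fit(tr_x, tr_y)
        accs.append(mean(1.0 if model(xs[j]) == ys[j] else 0.0
                         for j in test_idx))
    return mean(accs)

# Invented data: discrepancy scores vs. observed functional deficit (0/1)
xs = [2, 5, 8, 11, 3, 14, 9, 1, 13, 7, 12, 4]
ys = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0]

def fit_threshold(tr_x, tr_y):
    # Pick the cut maximizing training accuracy - "by any means possible"
    best = max(set(tr_x),
               key=lambda t: sum((x >= t) == bool(y)
                                 for x, y in zip(tr_x, tr_y)))
    return lambda x: int(x >= best)

print(kfold_accuracy(xs, ys, fit_threshold))
```

No dimensionality or measurement structure is assumed anywhere; the only
figure of merit is the held-out accuracy the last line prints.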
 
In the Rasch world, everything is done "by the numbers", with a
whopping assumption that Dysexecutive Function is a unidimensional,
quantitatively structured variable identical to THE latent variable
constructed by the Rasch model. Success is perhaps regarded more as a
function of model fit and of obtaining the expected peer-self score
discrepancies, irrespective of whether these maximize the predictive
accuracy of that discrepancy measure.
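 
For readers outside the Rasch world, a toy Python illustration of that
assumption (the dichotomous Rasch model, with invented item difficulties
- not the Chan and Bode analysis) shows how a single trait value and the
item difficulties alone determine every response probability, and hence
the expected totals:

```python
# Toy dichotomous Rasch model: one latent trait theta and one difficulty
# b per item fully determine the probability of endorsing each item.
# Item difficulties below are invented for illustration.
import math

def rasch_p(theta, b):
    """P(endorse item) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_total(theta, difficulties):
    """Model-implied expected raw total for a person at theta."""
    return sum(rasch_p(theta, b) for b in difficulties)

items = [-1.0, -0.5, 0.0, 0.5, 1.0]  # invented difficulties, in logits
print(rasch_p(0.0, 0.0))             # -> 0.5
print(expected_total(0.0, items))
```

Everything downstream - fit statistics, DIF, expected peer-self
discrepancies - is judged against what this one-dimensional structure
implies, which is precisely the assumption at issue.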
 
In the end, it would be the predictive accuracy of either approach which
would/should be the final arbiter of success.
 
So, when you say "new insights revealed because the model and software
encouraged a different way of thinking", I can only agree
wholeheartedly. It did "the business" by all accounts, so in this case
it was clearly a success. 
 
But I wonder whether my "examine those expectations and maximize
predictive accuracy" approach would have achieved even greater utility,
without so many assumptions being made about the structure of the data? 
 
Once you treat data models as "optional", and focus on the problem to be
solved rather than forcing a single data model to be applied to it, all
kinds of analysis and solution options open up. 
 
But I guess if your primary goal, or your perception of the
problem-solution, is to create a quantitatively structured latent
variable, then a data-model approach is mandatory, along with the
appropriate indicators of model fit.
 
And that opens up the wider issue - the status of data models
themselves (Breiman, L. (2001). Statistical modeling: the two cultures.
Statistical Science, 16(3), 199-231). 
 
I find these discussions fascinating for my own thinking at least.
 
Regards ... Paul
 
 
 

 

Paul Barrett, Ph.D.

2622 East 21st Street | Tulsa, OK  74114

Chief Research Scientist

Office | 918.749.0632  Fax | 918.749.0635

pbarrett at hoganassessments.com

      

hoganassessments.com <http://www.hoganassessments.com/> 