[Rasch] assessing construct validity over time

Alexander Freund pafreund at uni-muenster.de
Wed Jul 25 17:39:14 EST 2007


Dear Trevor,


thank you very much for your response!
Here is a little more information:

/What is the nature of the construct you are validating?
From whence do the items come?
What do they instantiate?/

The test items are all newly developed figural reasoning items, and they are used to measure general intelligence.


/Why are they written / arranged like that?
Is there an expected/hierarchical arrangement of difficulty?
/
I have a series of experiments investigating possible retest/practice effects on these items, and I'm manipulating
parameters such as identical vs. parallel items or power vs. timed testing conditions. There is no expected
arrangement of difficulty; the order of the items is random.


/Then
Do all 30 items fit the model when administered at once?
/
Yes, they do. From that I would conclude that the construct measured with these items doesn't change over time or as a result of
practice.
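(In case it helps other list readers: below is a rough sketch in Python of how one might fit a dichotomous Rasch model to such a persons-by-items 0/1 matrix by joint maximum likelihood and inspect the resulting estimates. The function names and the simulated data are purely illustrative assumptions, not the analysis actually used in the study, and dedicated Rasch software handles issues such as perfect scores and standard errors properly.)

    # Rough sketch only: joint maximum likelihood (JML) estimation of a
    # dichotomous Rasch model for an N-persons x I-items 0/1 matrix.
    # Names are illustrative; perfect/zero scores and standard errors are
    # ignored here and are handled properly by dedicated Rasch software.
    import numpy as np
    from scipy.optimize import minimize

    def fit_rasch_jml(X, max_iter=500):
        """X: (N persons x I items) array of 0/1 responses."""
        N, I = X.shape

        def neg_loglik(params):
            theta, beta = params[:N], params[N:]
            beta = beta - beta.mean()              # identification: mean item difficulty = 0
            eta = theta[:, None] - beta[None, :]   # logit of P(correct)
            return -(X * eta - np.logaddexp(0.0, eta)).sum()

        res = minimize(neg_loglik, np.zeros(N + I), method="L-BFGS-B",
                       options={"maxiter": max_iter})
        theta_hat = res.x[:N]
        beta_hat = res.x[N:] - res.x[N:].mean()
        return theta_hat, beta_hat

    # Quick check on simulated data (100 persons, 30 items, as in the design)
    rng = np.random.default_rng(1)
    theta_true = rng.normal(0.0, 1.0, size=100)
    beta_true = rng.normal(0.0, 1.0, size=30)
    p = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - beta_true[None, :])))
    X = (rng.random(p.shape) < p).astype(int)
    theta_hat, beta_hat = fit_rasch_jml(X)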
 

/Checking the invariance of the estimates can be done a number of ways:
Chapter B&F 2nd ed shows how and provides an XL spreadsheet on the CD
/
As I want to assess construct validity, I'm actually not interested here in whether the difficulty parameter estimates change
(they do, and are expected to anyway; I'm using a different methodology, multilevel logistic regression, to model this).
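(For anyone curious about that side of the analysis: a heavily simplified, fixed-effects analogue of such a model could look like the sketch below in Python, using an ordinary logistic regression with item and occasion dummies rather than a true multilevel model. The column names 'correct', 'item' and 'occasion' are hypothetical, and this is not the actual analysis referred to above.)

    # Simplified fixed-effects analogue of modelling difficulty change across
    # administrations: ordinary logistic regression with item and occasion
    # dummies (not the multilevel model referred to above). Data assumed in
    # long format, one row per response; column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    def fit_difficulty_drift(long_df):
        """long_df columns: 'correct' (0/1), 'item', 'occasion'."""
        X = pd.get_dummies(long_df[["item", "occasion"]].astype(str),
                           drop_first=True).astype(float)
        X = sm.add_constant(X)
        # Occasion effects capture an overall shift in difficulty; adding
        # item-by-occasion interactions would capture item-specific drift.
        return sm.Logit(long_df["correct"].astype(float), X).fit(disp=False)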


/15 items will say precious little about your kids (large SEs)
What will your sample size be?
What will be the distribution of the latent trait in that sample?
/
The sample size is 100 (because of the nature of the study), but measuring person ability is not too important here. 
The main question is: does the construct change with repeated administrations of these items? What I need is some 
confirmation that this simple approach is legitimate, or a hint as to what else I should do. But your information already helps a lot!
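(For anyone who wants to try the "one model for all 60 responses" arrangement: the data layout could look roughly like the sketch below in Python, treating each item-by-occasion combination as its own "virtual item". The column names are hypothetical, and this only illustrates the layout, not whether the approach is statistically justified.)

    # Sketch of the data arrangement only: each (item, occasion) pair becomes
    # a "virtual item", giving a persons x 60 matrix (15 items x 4 occasions,
    # i.e. 2 responses to each of the 30 distinct items) that could be fed
    # to a single Rasch analysis. Column names are hypothetical.
    import pandas as pd

    def to_virtual_item_matrix(long_df):
        """long_df columns: 'person', 'item', 'occasion', 'correct' (0/1)."""
        df = long_df.copy()
        df["virtual_item"] = df["item"].astype(str) + "_t" + df["occasion"].astype(str)
        return df.pivot(index="person", columns="virtual_item", values="correct")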


/Perhaps you should look at Sam Messick on construct validity...
/
Again, thanks a lot, I'll try to get hold of it.



>/Dear Alexander,
/>/I have a question I hope I can get a helpful answer to.
/>/I'm investigating retest effects for an intelligence test 
/>/(dichotomous scoring) over 4 test administrations. I would also like 
/>/to check if the construct measured with my items changes over time. 
/>/At every time point, I have sets of 15 items, and I'm using the same 
/>/15 items at points 1 and 3, and a set of item isomorphs at points 2 
/>/and 4. So essentially, every subject delivers 60 answers, and 2 
/>/answers each for 30 items, respectively. My question is how can I 
/>/use Rasch methodology to assess the construct validity of these 
/>/items over the 4 time points? Is it legitimate to simply try to fit 
/>/one Rasch model to all 60 items? Or what can be done in this case? I 
/>/would be very thankful for any advice.
/

/Hope that helps.
Best
Trevor/
-- 
Trevor G BOND Ph D
Professor and Head of Dept
Educational Psychology, Counselling & Learning Needs
D2-2F-01A EPCL Dept.
Hong Kong Institute of Education
10 Lo Ping Rd, Tai Po
New Territories HONG KONG

Voice: (852) 2948 8473
Fax:  (852) 2948 7983
Mob:

-- 
Dipl.-Psych. A. Freund
Psychologisches Institut IV
Fliednerstraße 21
D-48149 Münster
+ 49 251 83 34153
pafreund at uni-muenster.de



