[Rasch] Solving longitudinal puzzles with Rasch?
Iasonas Lamprianou
liasonas at cytanet.com.cy
Sun Dec 21 03:53:57 EST 2014
Thank you Carolyne and Rense
I will try the many-facets idea. Because I have a large (huge!) sample size, there should be no disjoint subsets. However, this assumes that all pupils progress equally across time (from month to month). Some pupils will progress at a different pace, so I expect differential month functioning (an interaction between the pupil and month facets). How can I model this in Facets? Any ideas? According to the manual, it seems that we can model the "bias", which will show the differential progress of each pupil compared to the average progress. But will I have huge misfit problems? I will run the analysis and find out (although it typically takes 6-8 hours, due to the large sample size)!
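[For what it is worth, the pupil-by-month interaction can be requested in Facets by flagging the two facets with B in the model statement; the sketch below is only an illustration, and the facet order and the rating-scale model (R) are assumptions about this particular data set:]

```
Facets = 3              ; facet 1 = pupils, facet 2 = months, facet 3 = items
Models = ?B, ?B, ?, R   ; B on pupils and months: report pupil-by-month bias terms
```

[The bias report then gives, for each pupil-month combination, the size and significance of the departure from the pupil's average measure, which is the "differential progress" described above.]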
Jason
----- Original Message -----
From: "Rense Lange" <rense.lange at gmail.com>
To: rasch at acer.edu.au
Sent: Friday, 19 December, 2014 3:14:31 PM
Subject: Re: [Rasch] Solving longitudinal puzzles with Rasch?
Well, the problem is precisely that we know there is no sample invariance - the basic problem is that the students get better over time (we hope). So there is no single student parameter, and if we give item subsets 1 and 2 on two different occasions, the difference between 1 and 2 is exaggerated (to be sure, the item "spacings" within each set might well be invariant - but NOT between sets, in this case).
Typically, I treat the same students as different ones if they are tested twice (no items are ever repeated). Of course, if you have enough students across months etc., then you could use the Facets program with month/quarter/year as an extra facet (assuming that month is the smallest available unit, at a minimum you'd have students, items, and months). If you don't do either of these, I have found that great distortions occur.
Rense Lange
On Dec 18, 2014, at 2:17 PM, Fidelman, Carolyn <Carolyn.Fidelman at ed.gov> wrote:
Isn't this simply the principle of "sample invariance", one of the strengths and cornerstones of IRT?
-----Original Message-----
From: rasch-bounces at acer.edu.au [ mailto:rasch-bounces at acer.edu.au ] On Behalf Of Iasonas Lamprianou
Sent: Monday, December 15, 2014 1:18 AM
To: rasch at acer.edu.au
Subject: Re: [Rasch] Solving longitudinal puzzles with Rasch?
Thank you Mike,
indeed, an interesting idea. I will try to work along those lines, and I hope I will get something useful out of it. Any additional ideas, anyone?
Jason
----- Original Message -----
From: "Mike Linacre" <mike at winsteps.com>
To: rasch at acer.edu.au
Sent: Friday, 12 December, 2014 10:03:25 AM
Subject: Re: [Rasch] Solving longitudinal puzzles with Rasch?
Thank you for asking, Iasonas.
You wrote: "This has the problem that the students are not really "different" and there should be a lot of collinearity (dependence)."
This is an unease felt by many statisticians when faced with time-series data. So, let's calibrate the unease by considering the best case and the worst case situations.
Students do not repeat tests, but perhaps items on the tests are classified by content area. Let's imagine so, and give each student stronger and weaker content areas.
Best case: the students change so much every two months that they are statistically unrecognizable. They usually have slightly higher overall ability each time. Their stronger and weaker content areas change each time. They become "new" students.
Worst case: the students do not change at all! We are testing the same students again at the same ability levels. Their stronger and weaker content areas do not change.
For convenience, let's imagine that the administration of tests to each student is the same in both situations, so that the total number of administrations of each item is the same in both situations.
Let's speculate: if we were shown only the item statistics for the two situations (without being told which is which), what differences would we see?
Here's a speculation: the student abilities in the best case increase during the year, so there will be relatively more successes later in the year. The item p-values in the best case will be higher than the item p-values in the worst case. But the "new" student abilities are also increasing in the best case, so higher p-values do not mean easier items in the Rasch sense. The two sets of item difficulties will be almost the same! If we were only shown the two sets of item difficulties, we would not know which is which (or have I missed something?)
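[A small simulation along the lines Mike suggests can check this speculation. The sketch below is an illustration only: the sample sizes, the 0.5-logit growth per occasion, and the crude joint maximum-likelihood routine are all assumptions, not anything from the thread. It simulates the "best case" (students retested at rising abilities) and the "worst case" (identical students retested), then compares the two sets of estimated item difficulties:]

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, beta):
    """Dichotomous Rasch responses for abilities theta and difficulties beta."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
    return (rng.random(p.shape) < p).astype(float)

def jmle_difficulties(X, n_iter=50):
    """Crude joint maximum-likelihood Rasch calibration (damped Newton steps).
    Persons with zero or perfect scores are dropped; items are centered at 0."""
    k = X.shape[1]
    keep = (X.sum(axis=1) > 0) & (X.sum(axis=1) < k)
    X = X[keep]
    theta = np.zeros(X.shape[0])
    beta = np.zeros(k)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        w = p * (1.0 - p)
        theta += np.clip((X - p).sum(axis=1) / w.sum(axis=1), -1, 1)
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        w = p * (1.0 - p)
        beta -= np.clip((X - p).sum(axis=0) / w.sum(axis=0), -1, 1)
        beta -= beta.mean()  # identify the scale by centering item difficulties
    return beta

n_students, n_occasions = 300, 3
beta_true = np.linspace(-2, 2, 20)          # hypothetical item difficulties
theta0 = rng.normal(0.0, 1.0, n_students)   # hypothetical starting abilities

# Best case: "new" students whose abilities rise 0.5 logits per occasion.
X_best = np.vstack([simulate(theta0 + 0.5 * t, beta_true)
                    for t in range(n_occasions)])
# Worst case: the same students retested at unchanged abilities.
X_worst = np.vstack([simulate(theta0, beta_true)
                     for _ in range(n_occasions)])

beta_best = jmle_difficulties(X_best)
beta_worst = jmle_difficulties(X_worst)

# The raw p-values differ (more successes in the best case) ...
print(X_best.mean(), X_worst.mean())
# ... but the two sets of centered item difficulties agree closely.
print(np.corrcoef(beta_best, beta_worst)[0, 1])
```

[Under these assumptions, the best-case p-values come out higher, yet the two difficulty sets are nearly indistinguishable, which is exactly the speculation above.]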
Everyone: any thoughts or speculations (or data simulations) to help Iasonas?
Mike L.
On 12/12/2014 14:29, Iasonas Lamprianou wrote:
<blockquote>
Dear all,
I need to solve a longitudinal puzzle. I would love to use Rasch (if it is the most appropriate tool). My post is long, but my puzzle is complex!
I have data from a computerized test. The students were allowed to log in whenever they wanted and take any number of short tests. Each test had 3-7 questions, and each test consists of different questions. There is no consistent pattern as to which tests were completed by the students (i.e. some students completed test A first, but others would complete test Z first). The tests are not of the same difficulty, the items within a test are not of the same difficulty, and the tests/items are not calibrated. There are many thousands of students and tens of tests (= hundreds of items). The teachers have a vague idea of the difficulty of each test, so they tried to match the difficulty of the test to the ability of the students. But of course, as I said, the tests are not calibrated (so the teachers were not really sure how difficult each test was), and they did not have precise measures of the students' ability (though of course they knew their students). This practice lasted for a whole year.
Some students were more industrious, so they used to log in every week (any time during the month/year) and take a test. Others logged in once a month; and others logged in only once and took only one test. Overall, the students have taken 4-5 tests on average (= 15-20 items), at random time points across the year. However, the ability of the students changed across the year. My question is how I can use the Rasch model (if I can) to analyze the data. In effect, my aim is: (a) to calibrate all the tests/items so that I can have an item bank, and (b) to estimate student abilities at the start and end of the year (wherever possible) to measure progress. I am ready to assume that item difficulties do not change (we do not alter the items) but that student abilities do change (hopefully improve) across time.
<blockquote>
I am not sure if this puzzle can be solved using Rasch models. I thought that I could split the year into intervals of, say, 2 months.
</blockquote>
Assume that the ability of each person during those two months is more or less the same, and that each person is a different version of themselves in the next two months. Then assume that item difficulties are fixed, and run the analysis with six times the number of students (the student "changes" every two months). This has the problem that the students are not really "different", and there should be a lot of collinearity (dependence).
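[The "stacking" step described above - one pseudo-person per student per 2-month window, with items anchored across the whole file - is mostly a data-reshaping exercise. A minimal sketch with pandas, on entirely made-up response records (the column names and the 2-month window rule are assumptions for illustration):]

```python
import pandas as pd

# Hypothetical long-format response data: one row per item response.
df = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2"],
    "month":   [1, 4, 9, 2, 11],
    "item":    ["i1", "i2", "i3", "i1", "i3"],
    "score":   [0, 1, 1, 1, 0],
})

# Split the year into six 2-month windows and give each student a new
# identity per window ("stacking"), so abilities may differ by window
# while item difficulties stay common to the whole file.
df["window"] = (df["month"] - 1) // 2          # months 1-2 -> 0, ..., 11-12 -> 5
df["pseudo_person"] = df["student"] + "_w" + df["window"].astype(str)

print(sorted(df["pseudo_person"].unique()))
```

[Calibrating this stacked file with pseudo_person as the person facet gives one ability per student per window against a single set of item difficulties; the collinearity worry above remains, since the pseudo-persons sharing a student are clearly not independent.]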
<blockquote>
Any idea will be valued and considered significant.
</blockquote>
________________________________________
Rasch mailing list
email: Rasch at acer.edu.au
web: https://mailinglist.acer.edu.au/mailman/options/rasch/liasonas%40cytanet.com.cy
</blockquote>