[Rasch] Solving longitudinal puzzles with Rasch?
Iasonas Lamprianou
liasonas at cytanet.com.cy
Mon Dec 15 17:18:09 EST 2014
Thank you, Mike,
indeed an interesting idea. I will try to work along those lines, and I hope I will get something useful out of it. Any additional ideas, anyone?
Jason
----- Original Message -----
From: "Mike Linacre" <mike at winsteps.com>
To: rasch at acer.edu.au
Sent: Friday, 12 December, 2014 10:03:25 AM
Subject: Re: [Rasch] Solving longitudinal puzzles with Rasch?
Thank you for asking, Iasonas.
You wrote: "This has the problem that the students are not really
"different" and there should be a lot of collinearity (dependence)."
This is an unease felt by many statisticians when faced with time-series
data. So, let's calibrate the unease by considering the best case and
the worst case situations.
Students do not repeat tests, but perhaps items on the tests are
classified by content area. Let's imagine so, and give each student
stronger and weaker content areas.
Best case: the students change so much every two months that they are
statistically unrecognizable. They usually have slightly higher overall
ability each time. Their stronger and weaker content areas change each
time. They become "new" students.
Worst case: the students do not change at all! We are testing the same
students again at the same ability levels. Their stronger and weaker
content areas do not change.
For convenience, let's imagine that the administration of tests to each
student is the same in both situations, so that the total number of
administrations of each item is the same in both situations.
Let's speculate: if we were shown only the item statistics for the two
situations (without being told which is which), what differences would
we see?
Here's a speculation: the student abilities in the best case increase
during the year, so there will be relatively more successes later in the
year. The item p-values in the best case will be higher than the item
p-values in the worst case. But the "new" student abilities are also
increasing in the best case, so higher p-values do not mean easier items
in the Rasch sense. The two sets of item difficulties will be almost the
same! If we were only shown the two sets of item difficulties, we would
not know which is which (or have I missed something?)
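One way to test this speculation is a quick simulation: generate Rasch responses for both cases and calibrate the items with a person-free pairwise method (log-odds of item-pair counts, which under the Rasch model does not depend on ability). Everything here is illustrative, not from the thread: the sample sizes, the three "waves", and the 0.5-logit ability growth per wave are invented for the sketch.

```python
# Sketch: compare item difficulty estimates when abilities rise across
# the year (best case) vs. stay fixed (worst case). All sizes and the
# growth rate are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_students, n_items, n_waves = 2000, 20, 3
b_true = np.linspace(-2.0, 2.0, n_items)      # true item difficulties
theta0 = rng.normal(0.0, 1.0, n_students)     # baseline abilities

def simulate(theta):
    """Dichotomous Rasch responses for one wave of administrations."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
    return (rng.random((theta.size, n_items)) < p).astype(int)

def pairwise_difficulties(X):
    """Person-free pairwise calibration: for items i and j,
    d_i - d_j = log(n_ji / n_ij), where n_ij counts persons with
    item i right and item j wrong. This conditional ratio does not
    involve ability, so ability shifts should leave it unchanged.
    Averaging over j yields (approximately centred) difficulties."""
    k = X.shape[1]
    d = np.zeros(k)
    for i in range(k):
        logits = []
        for j in range(k):
            if i == j:
                continue
            n_ij = np.sum((X[:, i] == 1) & (X[:, j] == 0))
            n_ji = np.sum((X[:, j] == 1) & (X[:, i] == 0))
            logits.append(np.log(n_ji / n_ij))
        d[i] = np.mean(logits)
    return d

# Worst case: identical abilities at every wave.
X_worst = np.vstack([simulate(theta0) for _ in range(n_waves)])
# Best case: abilities grow by 0.5 logits per wave.
X_best = np.vstack([simulate(theta0 + 0.5 * w) for w in range(n_waves)])

p_worst, p_best = X_worst.mean(), X_best.mean()
d_worst = pairwise_difficulties(X_worst)
d_best = pairwise_difficulties(X_best)

print(f"mean p-value, worst case: {p_worst:.3f}")
print(f"mean p-value, best case:  {p_best:.3f}")   # higher: abilities rose
print(f"max |d_best - d_worst|:   {np.abs(d_best - d_worst).max():.3f}")
```

In this sketch the p-values separate cleanly (higher in the best case) while the two sets of item difficulties agree to within sampling noise, which matches the speculation above: p-values confound ability with difficulty, Rasch calibrations do not.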
Everyone: any thoughts or speculations (or data simulations) to help
Iasonas?
Mike L.
On 12/12/2014 14:29, Iasonas Lamprianou wrote:
> Dear all,
> I need to solve a longitudinal puzzle. I would love to use Rasch (if
it is the most appropriate tool). My post is long, but my puzzle is complex!
> I have data from a computerized test. The students were allowed to
log in whenever they wanted to take any number of short tests. Each test
had 3-7 questions. Each test consisted of different questions. There is
no consistent pattern as to which tests were completed by the students
(i.e. some students completed test A first but others would complete
test Z first). The tests are not of the same difficulty. The items
within a test are not of the same difficulty. The tests/items are not
calibrated. There are many thousands of students and tens of tests
(=hundreds of items). The teachers had a vague idea of the difficulty
of each test, so they tried to match the difficulty of the test with the
ability of the students. But of course, as I said, the tests are not
calibrated (so the teachers were not really sure how difficult each test
was), and they did not really have precise measures of the students'
abilities (but of course they knew their students). This practice lasted
for a whole year. Some students were more industrious, so they used to
log in every week (any time during the month/year) and take a test.
Others logged in once a month, and others logged in only once and
took only one test. Overall, the students took on average 4-5
tests (=15-20 items), at random time points across the year. However,
the ability of the students changed across the year. My question is how
the ability of the students changed across the year. My question is how
can I use (if I can) the Rasch model to analyze the data? In effect, my
aim is: (a) to calibrate all the tests/items so that I can have an item
bank, and (b) estimate student abilities at the start and end of the
year (wherever possible) to measure progress. I am ready to assume that
item difficulties do not change (we do not alter the items) but student
abilities do change (hopefully improve) across time.
> I am not sure if this puzzle can be solved using Rasch models. I
thought that I could split the year into intervals of, say, 2 months.
Assume that the ability of each person during those two months is more
or less the same. Also assume that each person is a different version of
themselves in the next two months. Then assume that item difficulties are
fixed. Then run the analysis with six times the number of students (each
two months the student "changes"). This has the problem that the
students are not really "different" and there should be a lot of
collinearity (dependence).
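The "different version of the same student every two months" idea amounts to a simple data-restructuring step (often called "stacking"): each (student, window) pair becomes a new person ID while item IDs stay fixed, so a single calibration anchors all items on one scale. A minimal sketch; the column names and toy records are hypothetical, not from the actual data set:

```python
# Sketch of the "stacked" design: each (student, 2-month window) pair
# becomes a virtual person, while item IDs are shared across windows.
# Column names and the toy records below are invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2"],
    "item":    ["i1", "i2", "i1", "i3", "i1"],
    "month":   [1, 2, 9, 3, 11],    # month of administration, 1..12
    "score":   [0, 1, 1, 0, 1],     # dichotomous responses
})

# Six two-month windows: months 1-2 -> w0, 3-4 -> w1, ..., 11-12 -> w5.
responses["window"] = (responses["month"] - 1) // 2
responses["virtual_person"] = (
    responses["student"] + "_w" + responses["window"].astype(str)
)

# Wide, sparse person-by-item matrix ready for Rasch calibration:
# items anchor the scale; each virtual person gets one ability estimate.
wide = responses.pivot_table(index="virtual_person", columns="item",
                             values="score", aggfunc="first")
print(wide)
```

After calibration, comparing a student's ability estimates across windows (e.g. s1_w0 vs. s1_w4 here) measures progress on the common item scale; the unmodelled dependence between one student's virtual persons is, as noted above, the price of this simplification.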
> Any idea will be valued and considered to be significant.
________________________________________
Rasch mailing list
email: Rasch at acer.edu.au
web: https://mailinglist.acer.edu.au/mailman/options/rasch/liasonas%40cytanet.com.cy
More information about the Rasch mailing list