[Rasch] More on the Rasch model

Paul Barrett paul at pbarrett.net
Fri Nov 2 07:28:11 AEDT 2018

I note this statement which appears in the abstract of the article:

“which is taken to be a measurement on an interval scale with an arbitrary
origin and unit.”


Coupled with no recognition of, or even a response to, Joel Michell’s:

“Now, if a person’s correct response to an item depended solely on ability,
with no random ‘error’ component involved, one would only learn the ordinal
fact that that person’s ability at least matches the difficulty level of the
item. Item response modellers derive all quantitative information (as
distinct from merely ordinal) from the distributional properties of the
random ‘error’ component. If the model is true, the shape of the ‘error’
distribution reflects the quantitative structure of the attribute, but if
the attribute is not quantitative, the supposed shape of ‘error’ only
projects the image of a fictitious quantitivity. Here, as elsewhere,
psychometricians derive what they want most (measures) from what they know
least (the shape of ‘error’) by presuming to already know it.”  

Michell, J. (2004). Item Response Models, pathological science, and the
shape of error. Theory and Psychology, 14, 1, 121-129.
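Michell’s argument about error-free responses yielding only ordinal information can be made concrete with a small sketch (my own illustration, not from his article; the ability and difficulty values are arbitrary): if a correct response depends solely on whether ability matches item difficulty, any strictly increasing rescaling of the latent scale produces exactly the same response data, so the data identify order and nothing more.

```python
import math

# Person abilities and item difficulties on some latent scale (arbitrary values).
abilities = [-1.0, 0.2, 1.5]
difficulties = [-0.5, 0.0, 1.0]

def pattern(ab, dif):
    """Deterministic (error-free) responses: correct iff ability >= difficulty."""
    return [[1 if a >= d else 0 for d in dif] for a in ab]

# A strictly increasing but non-linear rescaling of the latent scale.
warp = math.exp

original = pattern(abilities, difficulties)
rescaled = pattern([warp(a) for a in abilities], [warp(d) for d in difficulties])

# The response data are identical under the rescaling, so the data alone
# cannot distinguish the two scalings: only the ordering is identified.
assert original == rescaled
print(original)
```

Any interval-scale claim must therefore come from somewhere other than the deterministic part of the model, which is exactly where Michell locates the problem: in the assumed shape of the ‘error’.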


After the latest paper from Gunter Trendler, I’d be surprised if anyone will in
future think of Rasch ‘measurement’ as anything more than another useful
pragmatic tool for scaling item responses; nothing special, but useful all the
same for ‘good enough’ assessment (not ‘measurement’ of a quantity) of a
psychological attribute. It’s not the math that’s at fault, but the
presumption that scaling item responses can somehow ‘create’ a quantitative
psychological attribute.

Trendler, G. (2018). Conjoint measurement undone. Theory and Psychology, in press.
http://journals.sagepub.com/doi/abs/10.1177/0959354318788729


According to classical measurement theory, fundamental measurement
necessarily requires the operation of concatenation qua physical addition.
Quantities which do not allow this operation are measurable only indirectly
by means of derived measurement. Since only extensive quantities sustain the
operation of physical addition, measurement in psychology has been
considered problematic. In contrast, the theory of conjoint measurement, as
developed in representational measurement theory, proposes that the
operation of ordering is sufficient for establishing fundamental
measurement. The validity of this view is questioned. The misconception
about the advantages of conjoint measurement, it is argued, results from the
failure to notice that magnitudes of derived quantities cannot be determined
directly, i.e., without the help of associated quantitative indicators. This
takes away the advantages conjoint measurement has over derived measurement,
making it practically useless.  


Ah well, a bit like SEMmers after Mike Maraun’s devastating critique of
so-called ‘latent variables’ ... a group of ‘stunned mullets’ comes to mind;
enduring beliefs are not easily set aside!

Maraun, M.D. (2007). Myths and Confusions.
http://www.sfu.ca/~maraun/myths-and-confusions.html [open-access book, pdf
chapters]


Part 2: The Central Account is Mythology

• VI Introduction

• VII The myth of the latent variable model as detector

• VIII The myth of unobservability

• IX Latent variable interpretation and the deeper problem

• X The myth that latent variable models are models

• XI Rebuttals and comments


To give you a taste, from the beginning of the Rebuttals and comments:

1. Introduction

The Central Account is an ürbild. By their very natures as world views,
ürbilds feel right. If they contain nonsense, the nonsense feels right, and
so escapes critical appraisal. It is humans that construct ürbilds, and it
is human nature to defend them. The following are a sampling of defenses of
the Central Account that I have encountered.

2. "But scientists find the Central Account useful"

Scientists who employ latent variable modeling technology do not standardly
consider the issue as to whether or not the Central Account is useful, for
it is an ürbild, and is presupposed in their work. The CA is not an aid to
scientific work, but the very lenses through which the researcher sees his
latent variable modeling. The CA provides the means by which results are
interpreted, and a common language with which to discuss the products of
latent variable modeling. But even if applied researchers and
psychometricians were aware that their employment of the CA represented a
philosophical commitment, it would be fatuous to use their claims that they
find the CA to be "useful" as grounds for retaining it. Such a defense would
be akin to attempting to defend phrenology by noting that "individuals plan
their lives according to the output from phrenological analyses."
Presumably, the fundamental aim of the social and behavioural sciences is to
arrive at a correct account of that segment of natural reality that is their
focus. If this is true, then the only relevant issue in regard to the use of
latent variable modeling technology within these disciplines is whether or
not it leads to correct conclusions about natural reality. However, as it is
currently employed, correct conclusions cannot be made because these
conclusions are expressed in terms of a mythology, and mythologies bear not
on natural reality.   


Never mind; the above is entirely rhetorical. 


It just struck me as odd that someone would bother fiddling around with a few
statistical concepts while the above issues remain unanswered. But then, who
among those on this list has read and digested:

Richters, J.E. (1997). The Hubble hypothesis and the developmentalist’s
dilemma. Development and Psychopathology, 9, 2, 193-229
(http://cogprints.org/1009/ [open-access]), and thought again about their
beliefs concerning ‘measurement’ within psychology?


Developmental psychopathology stands poised at the close of the 20th century
on the horns of a major scientific dilemma. The essence of this dilemma lies
in the contrast between its heuristically rich open system concepts on the
one hand, and the closed system paradigm it adopted from mainstream
psychology for investigating those models on the other. Many of the research
methods, assessment strategies, and data analytic models of psychology’s
paradigm are predicated on closed system assumptions and explanatory models.
Thus, they are fundamentally inadequate for studying humans, who are
unparalleled among open systems in their wide ranging capacities for
equifinal and multifinal functioning. Developmental psychopathology faces
two challenges in successfully negotiating the developmentalist’s dilemma.
The first lies in recognizing how the current paradigm encourages research
practices that are antithetical to developmental principles, yet continue to
flourish. I argue that the developmentalist’s dilemma is sustained by long
standing, mutually enabling weaknesses in the paradigm’s discovery methods
and scientific standards. These interdependent weaknesses function like a
distorted lens on the research process by variously sustaining the illusion
of theoretical progress, obscuring the need for fundamental reforms, and
both constraining and misguiding reform efforts. An understanding of how
these influences arise and take their toll provides a foundation and
rationale for engaging the second challenge. The essence of this challenge
will be finding ways to resolve the developmentalist’s dilemma outside the
constraints of the existing paradigm by developing indigenous research
strategies, methods, and standards with fidelity to the complexity of
developmental phenomena.   


Such an important paper, for what it explains about the nature of
‘psychological variables’ and the somewhat ‘adventurous’ but flawed approach
adopted by psychologists when it comes to matters of ‘measurement’ of such
attributes. Largely ignored, as you might expect, except among theoreticians
and investigative psychological scientists (rather than ersatz statisticians
and psychometricians).


A point made more forcefully in Michell’s powerful 2012 article:

Michell, J. (2012). Alfred Binet and the concept of heterogeneous orders.
Frontiers in Quantitative Psychology and Measurement, 3, 261, 1-8. Download
link: psyg.2012.00261/abstract [open-access].


But there I go again .. sorry! I’ll get back in my box!


Regards .. Paul


Chief Research Scientist

Cognadev Ltd.


W: www.pbarrett.net

E: paul at pbarrett.net

M: +64-(0)21-415625


From: Rasch <rasch-bounces at acer.edu.au> On Behalf Of David Andrich
Sent: Thursday, November 1, 2018 7:16 PM
To: rasch at acer.edu.au
Subject: [Rasch] More on the Rasch model


Colleagues might be interested in the paper which has just come out.


Andrich, D. & Pedler, P. (2019). A law of ordinal random error: the Rasch
measurement model and random error distributions of ordinal assessments.
Measurement, 131, 771–781.



In the paper we show an analogy between the Gaussian (Normal) distribution
as a random error distribution of replicated measurements and the Rasch
measurement model distribution for a polytomous item as a random error
distribution of ordinal assessments. The analogy holds both in terms of the
mathematical properties and the general principles.
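For anyone wanting the polytomous Rasch model in concrete form, here is a minimal sketch (my own illustration, not code from the paper; the person location, item location, and threshold values are purely illustrative) of the category probability computation:

```python
import math

def rasch_polytomous_probs(theta, delta, thresholds):
    """Category probabilities P(X = x), x = 0..m, for a polytomous Rasch item.

    theta: person location; delta: item location;
    thresholds: the m threshold parameters tau_1..tau_m.
    """
    # Cumulative log-weights: log w_0 = 0, log w_x = sum_{k<=x} (theta - delta - tau_k).
    log_w = [0.0]
    for tau in thresholds:
        log_w.append(log_w[-1] + (theta - delta - tau))
    # Normalise so the probabilities over the m+1 categories sum to 1.
    total = sum(math.exp(v) for v in log_w)
    return [math.exp(v) / total for v in log_w]

# A four-category item (three ordered thresholds); values purely illustrative.
probs = rasch_polytomous_probs(theta=0.5, delta=0.0, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # 4 category probabilities, summing to 1
```

The single-peaked shape of these category probabilities around the person-item difference is the feature the paper treats as analogous to a Gaussian error distribution of replicated measurements.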


All the best





David Andrich, BSc MEd W.Aust., PhD Chic, FASSA 

Chapple Professor   david.andrich at uwa.edu.au

Graduate School of Education
The University of Western Australia
M428, 35 Stirling Highway,
Crawley, Western Australia, 6009

Telephone: +61 8 6488 1085;   Fax: +61 8 6488 1052

www.matildabayclub.net


