[Rasch] Fan

Stephen Humphry stephen.humphry at uwa.edu.au
Wed Oct 31 14:53:47 EST 2007


Michael, I agree that AI is likely a problem far too complicated to tackle at
this point in history. I am yet to see a compelling argument that we are likely to
create genuine artificial intelligence before the successful creation of
artificial life. There is no intelligence without life in nature, given any
reasonable definition of 'intelligence'. Intelligence emerged long after
life did, and possibly billions of years afterward, depending upon
definitions and criteria. Certain complexity theorists have made precisely
this point if I recall correctly.

I'm interested to know what "wow" factor (as Paul put it) arises from neural
networks. They have generated a lot of interest, but as far as I'm concerned it
is hype. I would level precisely the same criticism at this field --
nothing has been achieved as a result of work on neural networks that would
not otherwise have been achieved with the emergence of information
technologies.

Cheers,

Steve.


-----Original Message-----
From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf
Of commons
Sent: Wednesday, 31 October 2007 8:14 AM
To: 'Paul Barrett'; rasch at acer.edu.au
Subject: RE: [Rasch] Fan

The problem with neural networks is that they are part of AI.  They take on
way too complicated problems and therefore have lots of problems.  They are
unstable, and they have to be fooled with to get them to converge.  In animals,
evolution has solved these problems.  Also, stacked neural networks are much
closer to nature and can solve much more complicated problems.
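A minimal sketch of the convergence fiddling described above (a toy numpy
example added for illustration, not anything from the post itself): the same
tiny network trained on XOR either settles or stalls depending only on the
learning rate one happens to pick.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(lr, steps=5000):
    """One hidden layer of 4 sigmoid units, squared-error loss, plain
    gradient descent.  Returns the final mean squared error."""
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))
    for _ in range(steps):
        h = sig(X @ W1 + b1)                 # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)  # back-propagate squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

# With a modest step size the error usually drops toward zero; with a large
# one the weights typically saturate and the error stays stuck near 0.25.
for lr in (0.5, 50.0):
    print(f"learning rate {lr:5.1f}: final MSE = {train(lr):.4f}")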

My best,

Michael Lamport Commons, Ph.D.
Assistant Clinical Professor
Department of Psychiatry
Harvard Medical School
Beth Israel Deaconess Medical Center
234 Huron Avenue
Cambridge, MA 02138-1328
commons at tiac.net
http://www.dareassociation.org/
617-497-5270 Telephone
617-320-0896  Cellular
617-491-5270  Facsimile

-----Original Message-----
From: rasch-bounces at acer.edu.au [mailto:rasch-bounces at acer.edu.au] On Behalf
Of Paul Barrett
Sent: Tuesday, October 30, 2007 7:40 PM
To: rasch at acer.edu.au
Subject: RE: [Rasch] Fan



-----Original Message-----
> From: rasch-bounces at acer.edu.au
> [mailto:rasch-bounces at acer.edu.au] On Behalf Of Moritz Heene
> Sent: Wednesday, October 31, 2007 8:47 AM
> To: rasch at acer.edu.au
> Subject: Re:[Rasch] Fan
> 
> Hello to all and hello especially to Paul,
> 
...
> I am not an expert in neural networks, I am just a beginner, but as far 
> as I can see, neural networks, as efficient as they are, still have 
> their problems.

Hello Moritz

That was some response!

I'm only replying to this bit quickly to note you missed my point!
Neural nets, like all machine learning algorithms, are prone to many
"threats to their validity", such as overlearning and a lack of
cross-validation, and it's sometimes really difficult to trace back how they
arrived at their outputs (!) .. 
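For what it's worth, a toy sketch of those two threats (an added scikit-learn
illustration, not from the original post): a flexible net memorises a small
noisy training sample, and only cross-validation reveals how little of that
generalises.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# a small, noisy sample: 60 cases, only 2 of 20 predictors carry signal
X, y = make_classification(n_samples=60, n_features=20, n_informative=2,
                           flip_y=0.2, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000, random_state=0)
net.fit(X, y)

# resubstitution accuracy is typically near 1.0 (overlearning);
# the 5-fold cross-validated accuracy is usually far lower.
print("accuracy on the training data:   ", net.score(X, y))
print("5-fold cross-validated accuracy: ",
      cross_val_score(net, X, y, cv=5).mean())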

But - I wanted to stress the impact this one innovation had in many areas -
unlike the impact of modern test theory, which was developed around the same
time - and to note why it was not taken up so quickly by those who utilize
test scores for explanatory or predictive purposes.

As to Andrew's point about back-propagation - again, it was the concept of a
network, and how such a concept might be causal for human neural development
and activity, that took several areas by storm. 

Whether or not neural nets survive beyond the next developments in AI is of
course a moot point.

And thanks for the detailed response on the Lexile system Andrew - very
valuable.

Regards .. Paul


 
_______________________________________________
Rasch mailing list
Rasch at acer.edu.au
http://mailinglist.acer.edu.au/mailman/listinfo/rasch
