Wednesday, September 12, 2012

Minds, Brains and Programs


Minds, Brains, and Programs
By: Dr. John R. Searle

This paper takes up the long-running debate about Artificial Intelligence (AI) and whether man-made machines are fundamentally capable of intentionality.  Here, intentionality means understanding something beyond its mere form: grasping what it represents or means.  Dr. Searle's Chinese room example shows that a machine passing the Turing test does not imply that it has intentionality.  A machine that can take in Chinese characters and accurately respond in Chinese does not thereby understand Chinese the way a Chinese-speaking human does.  A person with no prior knowledge of Chinese could take in the same characters and follow the same algorithmic procedures the machine uses to produce the same answers without knowing what any of them mean.  Since that person does not understand Chinese, yet can still deceive a Chinese-speaking human into thinking they do, the machine, by the same reasoning, does not understand Chinese either.
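To make the point concrete, here is a minimal sketch of the purely syntactic rule-following Searle describes.  The rulebook, the phrases in it, and the chinese_room function are my own hypothetical illustration, not anything from the paper: the program only matches strings of symbols and returns canned replies, never attaching meaning to them.

# Hypothetical rulebook mapping a Chinese question to a canned Chinese reply.
# The English comments translate the phrases; the program never uses meaning.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "会，一点点。",    # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(question: str) -> str:
    """Return a reply by pattern matching alone; no understanding is involved."""
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I do not understand."

if __name__ == "__main__":
    # The output looks fluent, but the "room" never understood the question.
    print(chinese_room("你好吗？"))

The output can look competent to an outside observer, yet nothing in the lookup ever touches what the symbols mean, which is exactly the gap between syntax and semantics that the Chinese room is meant to expose.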
The rest of the paper is Dr. Searle refuting replies to his argument that man-made machines cannot achieve intentionality the way a human mind can.  He explains that the human mind is intentional because our human brain is “causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality”.  We do not function the way a computer does, running formal algorithms that can only execute and hand their output to the next step of the program.
I believe Dr. Searle gives valid explanations of why machines running on formal processes can never be capable of intentionality.  They can only simulate understanding: the machine did not create the programs it runs, so it has no real understanding of their contents and no way of ever knowing them.  Looking back at the Chinese room, consider why the human did not gain any intentionality toward the Chinese language even though humans are fully capable of intentionality.  The algorithms taught them nothing that would aid in understanding Chinese.  The million-dollar question I think of when considering this is: how exactly do humans obtain intentionality about something?  That is what we need to know before any human can possibly make a man-made machine do likewise.  Under Dr. Searle's definition of understanding, how do humans come to the conclusion that they understand something?
