The paper by Newell and Simon (1976) for next Thurs. focuses largely on computer science issues (what AI is, etc.) and less on human beings. But the quotes below from Newell should help give you an idea where Newell stands on human cognition vis-a-vis artificial intelligence. You will find that he almost defines the `strong AI' position as described by Searle (1980).
This paper was written for the first national conference on Cognitive Science, a kind of keynote address. So Newell was trying to recapitulate what is known about human cognition and to justify the collaboration of subgroups of psychologists, computer scientists, linguists, philosophers, etc. in this newly-defined discipline.
Quotes from
Newell, A. (1980). `Physical Symbol Systems.' Cognitive Science, 4, 135-183.
(NB. I have inserted extra paragraph breaks to direct attention to
certain remarks and inserted clarifying comments in [square brackets]
and put some phrases in bold face. Long dashes show longer
breaks in the text.)
*****************************************
In my own estimation, the most fundamental contribution so far of artificial intelligence and computer science [to cognitive science] ... has been the notion of a physical symbol system. This concept of a broad class of systems that is capable of having and manipulating symbols, yet is also realizable within our physical universe, has emerged from our growing experience and analysis of the computer and how to program it to perform intellectual and perceptual tasks.
The notion of symbol that it defines is internal to this concept of a system. Thus, it is a hypothesis that these symbols are in fact the same symbols that we humans have and use every day of our lives.
Stated another way, the hypothesis is that humans are instances of physical symbol systems, and, by virtue of this, mind enters into the physical universe. (136)
- - - - -
In my own view, the hypothesis sets the terms on which we search for a scientific theory of mind. What we all seek are the further specifications of physical symbol systems that constitute the human mind or that constitute systems of powerful and efficient intelligence.
The physical symbol system is to our enterprise what the theory of evolution is to all biology, the cell doctrine to cellular biology, the notion of germs to the scientific concept of disease, the notion of tectonic plates to structural geology. (136)
- - - - -
The notion of a physical symbol system has been emerging throughout the quarter century of our joint enterprise [of cognitive science] - always important, always recognized, but always slightly out of focus as the decisive scientific hypothesis that it has now emerged to be. (137)
[As evidence that the notion is `out of focus'] recall the rhetoric of the fifties, where we [cognitive scientists like Newell and Simon] insisted that computers were symbol manipulation machines and not just number manipulation machines.... The great thing about computers, we argued, was that computers could take instructions and it was incidental, though useful, that they dealt with numbers. It was the same fundamental point about symbols, but our aim was to revise opinions about the computer, not about the nature of mind.
Another instance [demonstrating our uncertainty about symbol processing] is our ambivalence toward list processing languages. Historically, these have been critically important in abstracting the concept of symbol processing, and we have certainly recognized them as carriers of theoretical notions. Yet, we have also seen them as nothing but programming languages, i.e., as nothing but tools. The reason why AI programming continues to be done almost exclusively in list processing languages is sought in terms of ease of programming, interactive style and what not. That Lisp is a close approximation to a pure symbol system is often not accorded the weight it deserves. (138)
***************************************
One can see here a strong commitment to an a priori, Platonic essence -- an almost spiritual notion of ``the theory of information processing'' (137). This is something much broader and more general than mere AI: an insight about information processing by humans or whoever that justifies importing a particular set of conceptual tools to any relevant problem. Similar concepts underlie Chomsky's assumptions as well, I believe.
Bob Port, Sept/99