I propose to consider the question, 'Can machines think?'
This should begin with definitions of the meaning of the terms 'machine' and 'think'.
The definitions might be framed so as to reflect so far as possible the normal use of words,
but this attitude is dangerous.
If the meaning of the words 'machine' and 'think' are to be found
by examining how they are commonly used
it is difficult to escape the conclusion that the meaning
and the answer to the question, 'Can machines think?'
is to be sought in a statistical survey such as a Gallup poll. But this is absurd.
From A. M. Turing's (1950, p. 433) 'Computing Machinery and Intelligence'
Thinking like a Human 





Recall the Physical Symbol System Hypothesis or PSSH (Newell & Simon, 1976): a physical symbol system has the necessary and sufficient means for intelligent action. The two most important classes of physical symbol systems with which we are acquainted are human beings and computers. If the PSSH is true, then there must exist a complete description of cognitive processing at the symbolic level. However, no such description exists. According to the thinking like a human approach, to give a full account of mental processes and operations, one must instead invoke processes that lie beneath the symbolic level.

According to the Subsymbolic Hypothesis or SSH (Smolensky, 1987): let an intuitive processor denote a machine that runs programs responsible for behaviour that is not conscious rule application; a precise and complete formal description of the intuitive processor does not exist.

IMPLICATIONS of the SSH: the intuitive processor is a subconceptual connectionist system; it operates at an intermediate level between the neural level and the symbolic level; and connectionist systems are much closer to neural networks than symbolic systems are.

Recall that the basic units of the thinking rationally approach are propositions, about which persons have propositional attitudes (see Bringsjord's (2008) Logicist Manifesto). The thinking rationally approach is concerned with the laws of thought, the mind, and its mental operations. Conversely, the basic units of the thinking like a human approach are neurons. Neurons are the basic working units of the brain, so the thinking like a human approach is concerned with the brain. As real neurons are exceedingly complex, the aim of the thinking like a human approach is to model our understanding of neurons in a computationally feasible manner.

According to the McCulloch-Pitts Model of the Neuron (McCulloch & Pitts, 1943, 1947): a neuron can be modelled as a simple binary threshold unit. The unit receives binary inputs x_{1}, …, x_{n}, computes their weighted sum, and outputs 1 if that sum meets or exceeds a threshold, and 0 otherwise. Networks of such units can compute Boolean functions such as AND, OR, and NOT.
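The threshold behaviour of such a unit can be sketched in Python. The particular weights and thresholds below are illustrative choices, not taken from the original papers:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (output 1) iff the weighted sum
    of the binary inputs meets or exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs must be active for the unit to fire.
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
# OR: any single active input suffices.
OR = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=1)
# NOT: an inhibitory (negative) weight with a zero threshold.
NOT = lambda x: mp_neuron([x], [-1], threshold=0)
```

A fixed unit like this does not learn; the weights and threshold are set by hand, which is what the learning rules below address.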
According to the Learning Rule of Synaptic Reinforcement (Hebb, 1949): a synapse is a structure that permits a neuron to transmit an electrical or chemical signal to another neuron. x_{i} denotes the output of the input (presynaptic) cell, and y_{j} denotes the output of the output (postsynaptic) cell. w_{11} denotes the synaptic weight from x_{1} to y_{1}; more generally, w_{ij} denotes the synaptic weight from x_{i} to y_{j}. Δw_{ij} denotes the strength of the change in synaptic weight from x_{i} to y_{j}. When neuron x_{i} (presynaptic) fires, followed by neuron y_{j} (postsynaptic) firing, the synapse between x_{i} and y_{j} is strengthened, so Δw_{ij} will be positive. In its simplest modern form the rule is written Δw_{ij} = η x_{i} y_{j}, where η is a small positive learning rate. Neurons that fire together, wire together.
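A minimal sketch of this update rule in Python, with an illustrative learning rate η and activity values (not from Hebb's text):

```python
def hebbian_update(w, x, y, eta=0.1):
    """Hebbian learning: strengthen weight w[i][j] in proportion to
    the joint activity of presynaptic x[i] and postsynaptic y[j]."""
    return [[w[i][j] + eta * x[i] * y[j] for j in range(len(y))]
            for i in range(len(x))]

# Two presynaptic cells, one postsynaptic cell, all weights start at 0.
w = [[0.0], [0.0]]
w = hebbian_update(w, x=[1, 0], y=[1])  # only x1 fires together with y1
# w[0][0] grows (fire together, wire together); w[1][0] is unchanged.
```

Note that the weight change depends only on locally available activity, which is what makes the rule biologically plausible.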

The perceptron was an algorithm that could learn to associate inputs with outputs (Rosenblatt, 1958, 1962). The perceptron incorporated the following:
Let the bias of the perceptron be denoted by b. Let 'x' and 'o' represent patterns with a set of values { x_{1}, x_{2} }. Let w_{1} and w_{2} denote the associated weights of x_{1} and x_{2}. The perceptron could make correct classifications of patterns by virtue of its decision rule: output 1 if w_{1}x_{1} + w_{2}x_{2} + b > 0, and 0 otherwise, so that the line w_{1}x_{1} + w_{2}x_{2} + b = 0 serves as the boundary between the two classes. After each misclassification, the weights and bias are adjusted toward the correct answer.
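The learning procedure can be sketched as follows, assuming the standard perceptron update w ← w + η(t − y)x; the learning rate, zero initial weights, and the AND task are illustrative choices:

```python
def perceptron_train(data, eta=0.1, epochs=20):
    """Single-perceptron learning: after each misclassification,
    nudge the weights and bias toward the target output."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            w1 += eta * (target - y) * x1
            w2 += eta * (target - y) * x2
            b += eta * (target - y)
    return w1, w2, b

# AND is linearly separable, so the perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = perceptron_train(data)
predict = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

Because AND is separable, the learned line w_{1}x_{1} + w_{2}x_{2} + b = 0 ends up with (1, 1) on one side and the other three patterns on the other.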
IMPLICATIONS: the perceptron showed that a simple network could learn its weights from examples rather than having them set by hand, and the perceptron convergence theorem guarantees that it will find a separating line whenever one exists.


Argument (Minsky & Papert, 1969): a single perceptron computes a linear threshold function, so it can correctly classify only those patterns whose two classes can be separated by a single line (more generally, a hyperplane) in the input space.
IMPLICATIONS of this argument: there are some patterns (including extremely simple ones like the XOR logic function) that no perceptron could learn. This is known as the linear separability problem. Let white dots (○) denote the class of x_{1} ⊻ x_{2} bearing the truth value of 1/T, and let black dots (•) denote the class of x_{1} ⊻ x_{2} bearing the truth value of 0/F. The two classes (represented by ○ and •) cannot be separated by a single line.
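The linear separability problem can be observed directly. Running a standard single-unit perceptron learning rule on XOR (learning rate and epoch count are illustrative) always leaves at least one pattern misclassified, because no weights w_{1}, w_{2}, b can satisfy all four constraints at once:

```python
def perceptron_train(data, eta=0.1, epochs=100):
    """Standard perceptron learning rule on 2-input binary data."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            w1 += eta * (target - y) * x1
            w2 += eta * (target - y) * x2
            b += eta * (target - y)
    return w1, w2, b

# XOR: true exactly when the inputs differ -- not linearly separable.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w1, w2, b = perceptron_train(xor)
errors = sum(
    (1 if w1 * x1 + w2 * x2 + b > 0 else 0) != t
    for (x1, x2), t in xor
)
# errors is never 0: no line w1*x1 + w2*x2 + b = 0 separates XOR.
```

However long it trains, the count of misclassified XOR patterns stays above zero, whereas the same procedure solves AND or OR in a handful of epochs.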

According to Backpropagation (Rumelhart, Hinton, & Williams, 1986): a multilayered network can learn internal representations by propagating its output error backwards through the layers. Each weight is adjusted in proportion to its contribution to the error (the gradient of the error with respect to that weight), so hidden units between input and output come to encode features that a single-layer perceptron cannot.
Defenders of the thinking like a human approach will therefore recommend more complex (e.g. multilayered) networks and more complex transfer functions (e.g. multistep or smooth functions, as opposed to the linear step function).
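The multilayer remedy can be sketched in plain Python. The network size, learning rate, epoch count, and seeds below are illustrative choices, not taken from the 1986 paper; the point is only that a hidden layer plus a smooth transfer function lets gradient descent solve XOR:

```python
import math
import random

def sigmoid(z):
    """Smooth, differentiable transfer function (cf. the step function)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(hidden=4, eta=0.5, epochs=20000, seed=0):
    """Train a 2-input, one-hidden-layer, 1-output sigmoid network
    on XOR by backpropagating the squared output error."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(hidden)]
    b1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = rng.uniform(-1, 1)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (x1, x2), t in data:
            # Forward pass: hidden activations, then the output.
            h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(w1, b1)]
            o = sigmoid(sum(wj * hj for wj, hj in zip(w2, h)) + b2)
            # Backward pass: error terms for output and hidden units.
            d_o = (o - t) * o * (1 - o)
            d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            # Gradient-descent weight updates.
            for j in range(hidden):
                w2[j] -= eta * d_o * h[j]
                w1[j][0] -= eta * d_h[j] * x1
                w1[j][1] -= eta * d_h[j] * x2
                b1[j] -= eta * d_h[j]
            b2 -= eta * d_o

    def predict(x1, x2):
        h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(w1, b1)]
        return round(sigmoid(sum(wj * hj for wj, hj in zip(w2, h)) + b2))
    return predict

# Backpropagation can stall in a poor local minimum, so retry a few seeds.
for seed in range(10):
    predict = train_xor(seed=seed)
    if all(predict(a, b) == (a ^ b) for a in (0, 1) for b in (0, 1)):
        break
```

The restart loop reflects a real property of the method: unlike the perceptron rule, backpropagation carries no convergence guarantee, so in practice one reruns from different random initialisations.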