
  • I propose to consider the question, 'Can machines think?'
    This should begin with definitions of the meaning of the terms 'machine' and 'think'.
    The definitions might be framed so as to reflect so far as possible the normal use of the words,
    but this attitude is dangerous.

    If the meaning of the words 'machine' and 'think' are to be found
    by examining how they are commonly used
    it is difficult to escape the conclusion that the meaning
    and the answer to the question, 'Can machines think?'
    is to be sought in a statistical survey such as a Gallup poll. But this is absurd.
    - A. M. Turing (1950, p. 433), 'Computing Machinery and Intelligence'

    Acting Humanly







    1. Q1: Can machines think?
    2. The thinking rationally approach presupposes that there can be a correct response to the question of what it means to think rationally
    3. The thinking like a human approach presupposes that there can be a correct response to the question of what it means to think like human beings


    4. According to Turing (1950), Q1 is too ambiguous
    5. We end up in a debate about intelligence and what it means to be really thinking

    6. Instead, we should swap Q1 with Q2:
    7. Q2: Can a machine be constructed to pass a behavioral test for human intelligence satisfactorily?


    Swapping Q1 with Q2


    1. The acting humanly approach is concerned with whether machines are capable of producing behaviour that we would say required thinking in human beings
    2. The acting humanly approach is concerned with whether machines can act humanly

    3. The acting humanly approach is supported by:
      1. The Behaviorist Conception of Intelligence
      2. The Turing Test or TT
      3. The Turing Syllogism




    Alan Turing



    According to the Behaviorist Conception of Intelligence:
    1. We base our attributions of intelligence on behavioral tests or behavioral criteria
    2. A reliable behavioral disposition to carry out a decent conversation for a certain time interval gives us good grounds to attribute intelligence to the bearer of such a disposition




    According to the Turing Test or TT:
    A machine is in the 1st room
    A person is in the 2nd room
    A human judge is in the 3rd room

    TT

    Both the machine and the person respond by teletype to remarks made by the human judge in the 3rd room for some fixed period of time (e.g. an hour)
    The machine passes the TT just in case the judge cannot tell which are the machine's answers and which are those of the person
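
    Read as a procedure, one run of the test can be pictured with a small simulation. The sketch below is only illustrative: machine_reply, human_reply, and judge are hypothetical stand-ins for the occupants of the three rooms, and nothing here comes from Turing's paper apart from the sample question.

    import random

    def imitation_game(machine_reply, human_reply, judge, questions):
        # One run of the test. machine_reply and human_reply map a question
        # string to an answer string; judge takes the two anonymised
        # transcripts and returns the label ('X' or 'Y') it thinks is the machine.
        respondents = [("machine", machine_reply), ("human", human_reply)]
        random.shuffle(respondents)              # the judge cannot see the rooms
        labelled = dict(zip(("X", "Y"), respondents))

        # Collect each respondent's answers to the judge's questions (the 'teletype').
        transcripts = {
            label: [(q, reply(q)) for q in questions]
            for label, (_, reply) in labelled.items()
        }

        guess = judge(transcripts)
        machine_label = next(l for l, (kind, _) in labelled.items() if kind == "machine")
        # The machine passes this run just in case the judge fails to pick it out.
        return guess != machine_label

    # Example use with trivial stand-ins; the question is Turing's own example.
    passed = imitation_game(
        machine_reply=lambda q: "I would rather not say.",
        human_reply=lambda q: "Let me think about that.",
        judge=lambda transcripts: random.choice(list(transcripts)),
        questions=["Please write me a sonnet on the subject of the Forth Bridge."],
    )
    print("Machine passed this run:", passed)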



    The Loebner Prize



    According to the Turing Syllogism:
    P1: If an entity passes the TT, then it produces a sensible sequence of verbal responses to a sequence of verbal stimuli.
    P2: If an entity produces a sensible sequence of verbal responses to a sequence of verbal stimuli, then it is intelligent. — see the Behaviorist Conception of Intelligence
    C: ∴ If an entity passes the TT, then it is intelligent.


    Formally:
    1. P1: p → q
    2. P2: q → r
    3. C: ∴ p → r
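
    The inference pattern here is the classical hypothetical syllogism; as a quick sanity check, it can be verified in a proof assistant such as Lean (a minimal sketch):

    -- Hypothetical syllogism: from p → q and q → r, infer p → r.
    example (p q r : Prop) (h1 : p → q) (h2 : q → r) : p → r :=
      fun hp => h2 (h1 hp)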


    Most simplified versions of the TT involve having a computer in the other room try to fool an interrogator into believing that the computer is a human
    Hugh Loebner's Grand Prize of $100,000 and a Gold Medal have been reserved for the 1st computer whose responses are indistinguishable from a human's



    Joseph Weizenbaum



    EXAMPLE 1 of a conversational program designed in accordance with the acting humanly approach: ELIZA

    ELIZA was coded at MIT by Joseph Weizenbaum in the 1960s
    ELIZA was a very simple program: Weizenbaum's original was written in MAD-SLIP, and well-known later reimplementations run to roughly 200 lines of BASIC
    ELIZA could imitate a psychotherapist by employing a small set of strategies
    In the guise of a psychotherapist, ELIZA could adopt the pose of knowing almost nothing of the real world

    Joseph Weizenbaum with ELIZA



    Sample ELIZA input/output: demo adapted from the JavaScript code for ELIZA by George Dunlop, http://psych.fullerton.edu/mbirnbaum/psych101/eliza.htm
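
    For a sense of how little machinery such a program needs, here is a minimal ELIZA-style sketch in Python; the keyword rules are invented for this illustration and are far cruder than Weizenbaum's script language.

    import re

    # A tiny set of keyword rules in the spirit of ELIZA's DOCTOR script.
    # Each pattern captures part of the user's input and reflects it back.
    RULES = [
        (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
    ]
    DEFAULT = "Please go on."

    def eliza_reply(utterance: str) -> str:
        # Return a canned 'therapist' response by simple pattern matching.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return DEFAULT

    print(eliza_reply("I am worried about my exams"))
    # -> How long have you been worried about my exams?
    # (A full ELIZA would also swap pronouns, e.g. 'my' -> 'your', which this sketch omits.)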




    Key:
    'H' denotes 'Human'
    'P' denotes 'PARRY'



    EXAMPLE 2 of a conversational program designed in accordance with the acting humanly approach: PARRY

    PARRY was coded at Stanford by Kenneth Colby in the early 1970s
    PARRY could imitate a paranoid schizophrenic
    Expert psychiatrists were unable to distinguish PARRY's ramblings from those of human paranoid schizophrenics
    Colby described PARRY as 'ELIZA with attitude'
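
    Colby's phrase points at PARRY's main addition to the ELIZA recipe: an internal affect model (usually described as tracking variables such as fear, anger, and mistrust) that shifts how it answers. The Python sketch below is only a loose illustration of that idea, not Colby's actual model; the trigger words and thresholds are made up.

    # 'ELIZA with attitude': replies are still canned, but an internal affect
    # state (here a single 'anger' level) changes which reply is chosen.
    TRIGGERS = ("police", "mafia", "bookie")

    class ParryLikeBot:
        def __init__(self):
            self.anger = 0.0   # crude stand-in for PARRY's affect variables

        def reply(self, utterance: str) -> str:
            # Topics touching the bot's delusional theme raise its anger level.
            if any(word in utterance.lower() for word in TRIGGERS):
                self.anger = min(1.0, self.anger + 0.4)
            else:
                self.anger = max(0.0, self.anger - 0.1)

            # The same kind of input yields different responses depending on affect.
            if self.anger > 0.7:
                return "I don't want to talk about that. You're one of them, aren't you?"
            if self.anger > 0.3:
                return "Why are you asking me about that?"
            return "I went to the races last week."

    bot = ParryLikeBot()
    print(bot.reply("Tell me about the police"))   # anger rises: guarded reply
    print(bot.reply("Are the police after you?"))  # anger rises further: hostile reply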

    Vint Cerf (one of the fathers of the Internet)

    In 1972, Vint Cerf set up a conversation between ELIZA (based at MIT) and PARRY (based at Stanford)
    Cerf used ARPANET, an early packet switching network that later became the technical foundation of the Internet

    Diagram of ARPANET




    Eugene Goostman



    EXAMPLE 3 of a conversational program designed in accordance with the acting humanly approach: Eugene Goostman

    Eugene Goostman was a chatbot developed by three programmers, Vladimir Veselov, Eugene Demchenko, and Sergey Ulasen
    In 2014, Eugene Goostman was supposed to have passed the TT by fooling 33% of the judges into thinking it was human
    2014 marked the 60th anniversary of Alan Turing's death
    Eugene Goostman could imitate a 13-year-old Ukrainian boy
    In the guise of a 13-year-old Ukrainian boy, Eugene Goostman could get away with its grammatical errors and lack of general knowledge

    Have a conversation with Eugene Goostman: http://eugenegoostman.elasticbeanstalk.com/