Alan Turing's Idea
Strangely, the philosophy of artificial intelligence precedes artificial intelligence itself by about six years; well before anyone was able to make a computer show intelligent or quasi-intelligent behavior, Alan Turing proposed in 1950, in his paper "Computing Machinery and Intelligence," that, in short, machines could think. The long form of this claim, however, is more complicated. As Turing points out, the question "Can machines think?" requires a definition of both "machines" and "thinking," and since definitions of these terms vary widely, he proposes instead the now-famous "Turing Test." Since this test has been widely reinterpreted and restated, it is presented here in its original form:
Imagine a psychological test in which there is a man (A), a woman (B), and an interrogator of either sex (C). (C) is in a separate room from the other two; he asks them both questions in an effort to determine which is the man and which the woman. It is (A)'s goal to confuse (C) with misleading or perhaps blatantly false answers; it is (B)'s goal to help (C), probably by giving truthful answers. Turing points out that it is perfectly within reason for (B) to say, "I am the woman," but since (A) can say this just as well, it is no help. Now, imagine that a machine takes the role of (A) and (C)'s goal is to determine which of the two is the machine and which the human. According to Turing, if (C) decides wrongly as often in this version of the game as in the original man-and-woman version, the machine has been shown to have intelligence.
Turing is careful to explain that this is a test of imitating the human mind, not the human itself: "We do not wish to penalize the machine for its inability to shine in beauty competitions, nor to penalize a man for losing in a race against an aeroplane" (Turing 42). He also makes it clear that the reason why the digital computer should be the machine to take the test is precisely that the digital computer is designed to carry out any operations which are possible by a "human computer," although it can certainly perform them faster.
In a preemptive strike against obvious philosophical objections, Turing addresses nine issues which this test raises; two of these have the most direct influence on AI as a field. The first is the "Argument from Consciousness," which states that a computer could "artificially signal" emotions or intentions but could not actually feel them. Turing's answer is one that AI researchers have repeated ever since: if we are able to create a machine that appears to have human-like intelligence, it does not matter whether the machine is "really conscious," because the only definition we have for consciousness at this point is the phenomenological one. If it seems conscious, it is. The same answer is given to the "Mathematical Objection," the claim that there are questions which computers simply cannot and will never answer, but which humans can. As Turing puts it, "there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on" (Turing 52).
These claims established AI as a forward-looking field, one interested primarily in the ultimate potential of computers to come and, consequently, in the admittedly limited potential of present computers. One of the questions this project hoped to cast in sharp relief is whether this is an ethical way of organizing a field of research. Turing's influence on AI, its philosophy, and its premises cannot be overestimated.