The Discrepancy Between Results and Ideas: An Interlude

Perhaps the strangest thing about AI as an area of human knowledge is the huge discrepancy that still exists between its claims and its results. Most of the ethical issues raised by AI, in fact, arise more from the ideas and goals of AI than from the AI systems actually in use today. Although we will examine the ethics raised by the systems that are in place, that is, the expert systems used in industry, the rest of the ethical issues this project will raise come out of society's conception of AI, the direction AI research is taking, and finally, the very question of whether the discrepancy between AI's results and its claims is ethical. This section will therefore draw a distinction between AI as a philosophy and AI as a research program, so that the ethical questions of the rest of the project can be understood in context.

It would be a little mean-spirited to say that the philosophy of AI makes no distinction between science and science fiction, but I would like to propose that whatever basis the project of AI has in the history of Western philosophy, its primary quality has always been that it has consistently looked toward the future. Whatever philosophical basis it has taken, AI, like most sciences, has always claimed that no theory can be abandoned until it is thoroughly disproved. As long as it has not been demonstrated that universal symbol systems cannot exhibit intelligence (and as long as the mind can still be described as a universal symbol system), one cannot eliminate GOFAI from the realm of AI.

This is a strange quality of AI, and it seems to arise from the interaction of scientific and philosophical traditions that take vastly different views of theories. In the philosophical tradition, theories are judged, in some ways, on their ethics. A philosophical theory has implications for the way one lives one's life, and we judge a philosophical theory good if it leads to a way of life that we think is a good one, and bad if it leads to a way of life that we think is wrong. Scientific theories, on the other hand, are judged largely on two things: the results of empirical testing using the theory, and how well they fit into prevailing "good" scientific theories. If they are both borne out by testing and fit into the prevailing scheme, then they are good and sustain the prevailing scientific paradigm. If they are borne out by testing but do not fit into the current scheme, then they are good and they prove the prevailing paradigm false, causing the paradigm to change: this is scientific progress. If they are not borne out by testing, then they are bad and are discarded.

The conflict in AI is specifically that it is trying to apply philosophical theories scientifically. This is a relatively novel problem in science today. Because AI represents the confluence of computers and a belief that intelligence operates in much the same way that computation happens in a digital computer, there is no pre-existing scientific paradigm against which to compare empirical results. In other words, the results of empirical testing can only be compared to the philosophical theories, not to previous scientific theories that have been empirically borne out. The reason we refer to changes in science as "progress" is that scientific theories build upon one another, and although at the beginning of Western science there are philosophical bases (largely the work of Aristotle), we are historically removed from them, and science has shown itself empirically successful enough times that until very recently we did not question the general idea of how science works. Nothing, however, is quite like the project of AI, and as a result we do not yet have enough of an empirical basis to rigorously test the theories of how it is to work.

This is why Winograd's critique of AI is so potentially damaging: it questions the philosophical bases atop which AI is being built. It is therefore a definitively ethical question whether or not it is right for AI to make the predictive claims it makes, considering its infancy and the current questioning, within philosophy, of the very bases it takes as its premises. That is why it is reasonable to explore the ethics of AI's potential: the discourse of AI makes little distinction between AI as it is today and AI as we hope it will be tomorrow. Some would argue that we must make ethical decisions about AI before it has grown too large for us to control; whether or not this is true, we must make ethical decisions about AI today because they so closely affect the way we conceive of ourselves, and because we cannot deny that tomorrow AI may grow larger.