Expert Systems

The quintessential expert systems are chess-playing programs; the most famous of these is IBM's Deep Blue, which in 1997 defeated Garry Kasparov, the reigning world champion. Chess has long been considered the premier testbed for heuristic problem-solving computers: the game is partly tractable by the brute-force computation at which computers excel, yet to play at a reasonably expert level on a human time scale a computer apparently must also recognize patterns on the board much as a human does. It is generally assumed that considering all possible moves by brute force does not count as intelligence, whereas recognizing "meaningful" board positions and patterns does. This example shows that, despite expert systems' apparent disconnection from the modelling of human intelligence, we are generally more willing to consider an expert system a form of AI when its behavior appears more human-like.
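
To make the brute-force side of this distinction concrete, here is a minimal, self-contained sketch of exhaustive game-tree search (minimax). It plays a toy game, one-heap Nim, rather than chess, and reflects nothing of Deep Blue's actual architecture; every name in it is invented for illustration.

    # Minimax over the full game tree of one-heap Nim: players
    # alternately take 1-3 stones, and whoever takes the last stone
    # wins. The exhaustive enumeration of moves is the point; the
    # game itself is a stand-in for chess.

    def minimax(heap, maximizing):
        """Return +1 if the root player wins with perfect play, else -1."""
        if heap == 0:
            # The previous player took the last stone and won.
            return -1 if maximizing else 1
        moves = range(1, min(3, heap) + 1)
        scores = [minimax(heap - take, not maximizing) for take in moves]
        return max(scores) if maximizing else min(scores)

    for heap in range(1, 9):
        outcome = "win" if minimax(heap, True) == 1 else "loss"
        print(f"heap of {heap}: {outcome} for the player to move")

Note that nothing in this search "understands" the game; it simply tries everything, which is precisely why such computation is often denied the label of intelligence.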

Nevertheless, because the only requirement for an AI expert system is autonomous problem solving, which is quite achievable within a small domain of information, expert systems have become considerably more widespread and successful than attempts at generalized human intelligence. Assembly-line robots, for instance, are expert systems, as are machines that read texts aloud for blind persons and programs that analyze trends in the stock market and make recommendations.
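
For a concrete, if hedged, picture of what such a narrow system looks like, the sketch below implements a tiny forward-chaining rule engine, the classic architecture behind many expert systems. The stock-market rules and facts are invented placeholders, not taken from any real trading system.

    # A tiny forward-chaining rule engine: whenever all of a rule's
    # conditions are known facts, its conclusion becomes a new fact,
    # and the loop repeats until nothing new can be derived.

    RULES = [
        # (set of conditions, conclusion) -- placeholder domain knowledge
        ({"price_above_200day_avg", "volume_rising"}, "uptrend"),
        ({"uptrend", "earnings_beat"}, "recommend_buy"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    derived = forward_chain({"price_above_200day_avg", "volume_rising",
                             "earnings_beat"}, RULES)
    print(sorted(derived))  # now includes "uptrend" and "recommend_buy"

The "expertise" lives entirely in the rule base: the engine is trivial, and the system can do nothing outside the handful of facts its rules mention, which is exactly the deep-but-narrow character described below.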

In general, the problem with pure expert systems is that humans, unsurprisingly, tend to consider a system more worthy and more intelligent when its intelligence appears more human-like. This means that as expert systems get "smarter," they necessarily start to take human models of intelligence as their guides, and quickly run into two problems. First, human intelligence depends largely on stretching across many domains, which contradicts the deep but narrow intelligence of an expert system; second, human intelligence is fallible. Expert systems are supposed to outperform humans because they are faster and use more rigorous algorithms and heuristics to find their answers; an expert system that imitates enough of a human-like thought process will therefore inherit human-like mistakes.

Because of their built-in limitations (they are, after all, designed to solve only one problem), expert systems are no longer considered a particularly exciting realm of AI research; however, they are by far the most prevalent form of AI in actual use today, and thus the type of AI about which the most immediate ethical questions arise.
