Week 14 Reactions

Reading:
I saw this posting for a conference on Computational Models of Creative Cognition and thought it might be relevant to our discussion this week.

How can true human creativity, in language, art, or any other mode of expression, be claimed to arise from an algorithm, a mere computation, as many proponents of Artificial Intelligence would claim?

Creativity may be viewed in its most general sense as a meaningful stretching of common conventions and rules; but one (either human or mechanical) must be aware of the inherent limitations of such rules before they can be stretched in any way that can be considered purposeful and not wholly random. It is a truism then (and somewhat of a cliche) that an inventive system must be able to 'jump outside of itself' if it is to exhibit any sign of what one would consider creative behaviour.

Nonetheless, even if a computational system does possess enough self-knowledge to perform this leap of creativity, it can of course still be said to follow an algorithm (an inventive algorithm, to be sure, but one that is still rule-bound). This latter point alone is often considered a sufficient basis for dismissing creative computation by AI's critics.

From an alternative perspective, then, it is not only necessary for an AI theorist/engineer to demonstrate why he/she thinks a system is indeed acting creatively; it is also necessary to demonstrate that human creativity is, at its root, an algorithmic, rule-bound process (even if these rules are stochastically founded). Creativity does, after all, need to be constrained in some important senses; if one breaks too many rules the result is absurdity, not creativity.

Theme:

This conference, which is a sequel to the original and successful Mind I conference held in Sheffield, England in 1995 (proceedings published by John Benjamins 1997 as Two Sciences of Mind), addresses itself to these key questions of creativity in AI and Cognitive Science.

Aspects of creativity covered by this conference include, but are not limited to, the following topics:
Metaphor (linguistic and otherwise; conceptual models thereof)
Analogical Reasoning (generation and interpretation of interesting analogies)
Lexical creativity (blending, etc.)
Creativity as insightful reminding/reuse
Creative rephrasing for machine translation
Knowledge Discovery in large (world) knowledge-bases
Realms of Mental Bisociation (in the sense of Koestler)
Humour (verbal wordplay and narrative surprises in language; humour in other forms of expression such as cartoons)
Story Generation / Understanding
Picture generation
Music composition
Generation of insightful explanations
Creative environments (brainstorming environments, story authoring tools, etc.)

Program Committee:
Victor Raskin, Purdue University, USA.
Stevan Harnad, Southampton University, UK.
Mark Turner, Institute for Advanced Study, Princeton, USA.
Douglas Hofstadter, Indiana University, Bloomington, USA.
Bipin Indurkhya, Tokyo University of Agriculture and Technology, Japan.
John Barnden, New Mexico State University at Las Cruces, USA.
Ashwin Ram, Georgia Tech, Atlanta, USA.
Arthur Cater, University College Dublin, Ireland.
Ronan Reilly, University College Dublin, Ireland.
Mark Keane, Trinity College, Dublin, Ireland.
Tony Veale, Dublin City University, Dublin, Ireland.


Alex Robinson

Chapter 10

I found the Letter Spirit chapter interesting, but, as Hofstadter & McGraw said, I do not think the sort of thing that goes on in GridFont resembles what goes on in an actual human brain. I still find it interesting as an organized attempt at tackling the idea of creativity and style in a domain that is probably less subjective than many (such as poetry and imagery). I wonder, however, whether we should bother trying to understand how creativity works until we have a better grasp of the inner structure of more fundamental aspects of the brain. Still, from a functional perspective of trying to implement a font generator, Letter Spirit is well justified. My greatest suspicion and fear with many of these projects that attempt to tackle one aspect of human cognition is that our brain only works the way it does with almost everything in place. Hofstadter asks at what level of detail we can capture the essence without reducing it too much, but what if every level of structure in the brain interacts with all the other levels? Creativity may require the integration of many different areas, yet still rely on strings of neurons that cross over into other functional areas of the brain. Convenient layers of abstraction cannot be easily created. I suppose I need to talk to a neuropsychologist to find out how localized the brain can be, but I wonder if we will see much different behavior when neural-network-type simulations are modeled in a one-to-one fashion.

Epilogue

I have always liked the Turing test for its ability to confound people's human-centric definitions of intelligence, but otherwise it seems that it is often not necessary for AI projects, because the framework of the machine is accessible. A lot can be learned from external behavior, but will it teach us something we cannot learn from looking at the framework of the machine?


Roger Bock
Letter Spirit
pg 407 - I like the title: "The Goal of Imparting a Sense of Deep Style to a Machine." Deep Style sounds very impressive. It's kind of like Deep Blue, only with a fashion sense.

pg 417 - What's an aleph?

pg 432 - What's a fillip?

pg 436 - What is the k-armed-bandit problem?

pg 453 - Nothing thought-provoking yet, just more vocab questions. Serifs?

pg 462 - I guess I'm kind of disappointed with the fairly abject failure of the connectionist version of Letter Spirit. I would definitely not consider it a success. However, is it the best try yet at generalizing fonts from a few seed letters? It is unclear to me whether Letter Spirit actually exists, or whether it's just an idea in Hofstadter's head. It seems unfair of him to rip on GridFont when he has no better functioning alternative. I think it reduces Hofstadter's credibility to be attacking this other model when his own results are not presented or don't exist at all.

pg 464 - I think it's unfair to say that GridFont just spits out its answer without any prior consideration. I'd argue that the thousands of cycles of training on the training set count as GridFont thinking about the problem. Although GridFont is not remotely close to what is going on in actual human minds, I'd argue that Letter Spirit isn't that much closer. I find it amazing that someone spent three years thinking of a font. Is font-creating a profitable industry, or just fun?

pg 465 - "The hubris of the authors of GridFont is surprising to us." Reading this sentence I couldn't help but consider how it would sound if GridFont was replaced with Letter Spirit. The article was so brilliant and thought-provoking up to this section, I wish the authors hadn't decided to finish the article by mean-spiritedly attacking a parallel research project.

pg 466 - Does modern psychology agree with Letter Spirit's idea of how creative thinking is done in this domain? What's the Boolean Dream? How is the "naive blanket faith" in hidden layers any different from a naive blanket faith in codelets? Does Letter Spirit work? Is it coded at all? Have any of its components been tested?

Epilogue
pg 470 - What's the Eliza effect?

pg 475 - I'm pretty impressed with Racter, especially that it was written in BASIC. I, too, would like to understand how much of the prose is created from scratch. I wonder if Racter is nothing more than a complicated Madlib.

pg 476 - Kind of neat, I'd never heard of the Goldbach conjecture before. I'm still impressed by the AM program. I think people shouldn't be that disappointed that Lenat acted as a filter. Although pure machine intelligence may be out of our reach, human-assisted machine intelligence is still pretty impressive (as opposed to machine-assisted human intelligence). How much study is done on these hybrid systems?
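For reference, the Goldbach conjecture mentioned here states that every even integer greater than 2 is the sum of two primes. A brute-force check of small cases (purely illustrative; this has nothing to do with how Lenat's AM program actually worked) is easy to sketch:

```python
def is_prime(n):
    # trial division up to sqrt(n)
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    # return one (p, q) with p + q == n and both prime, or None if none exists
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 20, 2):
    print(n, goldbach_pair(n))
```

No counterexample has ever been found, though the conjecture remains unproven.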

pg 482 - Would a person with a severe cognitive impairment fail the Turing test? I.e., would the person on the other end think they were a program? If the answer is yes, I wonder how good a test it is.

pg 484 - What is an object file, a geon, a 2.5-D sketch, and the society of mind model?

pg 486 - What is KLONE, and what does abstruse mean?

pg 489 - It seems that the Turing test is a test of human-like intelligence. Is it possible you could have a true artificial intelligence fail the test merely because it was intelligent in a way unexpected by us? Is time really the most common noun and why would that have any bearing on the true nature of the artificial intelligence being examined? What does distal mean?

Jonathon Shlens
4/20/97

LETTER SPIRIT
- I had a few questions about Hofstadter's criticism of DAFFODIL. DAFFODIL's goal is to demonstrate creativity in making fonts. He criticizes it, though, because the programmer must add into it what type of style he/she wants in the font. It seems to me that "if it walks like a duck and talks like a duck, then maybe it is a duck." As in the Turing test, all one is concerned with is the final product. A programmer tells the program what "style" of font he/she wants, and the program outputs it. Therefore, although it does not understand the "Platonic letter," it still has at least a graphical (only graphical) concept of what the letter looks like and thus is able to create stylized fonts. Furthermore, he seems to criticize DAFFODIL because it "has no ability to perceive or judge anything it has produced." The ability to tell "whether it is attractive or ugly" seems to be a slight exaggeration - I don't think Letter Spirit will even be able to perform such a task when completed.
- looking at the list of types of "a"s on page 424 made me realize that I cannot identify over 50% of these types of "a"s without seeing the context they appear in. I guess the context of the letter is as important as, if not more important than, the design itself in identifying it.
- I do not understand how one would train Letter Spirit on a font. I assume that the learning process will not be like DAFFODIL's - inputting the style - but will be something like presenting the letter font and the corresponding letter type and letting it build an association between the two?
- in the "parallel terraced scan" the program examines the "degree of estimated promise of a future pathway." As he says in the next line, this is similar to a genetic algorithm. This makes me ask: what is the "selection rule" which determines the "fitness" of each future pathway? It seems that by determining the "fitness" of each pathway, the program is just doing the same work twice: once to estimate the fitness and again to actually follow the pathway.
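One possible answer to the "work twice" worry is that the promise estimate is meant to be much cheaper than actually following a pathway. Here is a minimal sketch of that two-stage idea (all names and the scoring scheme are invented for illustration, not taken from Letter Spirit's actual code):

```python
import random

def cheap_estimate(pathway):
    # Stage 1: a fast, noisy guess at a pathway's promise.
    return pathway["true_quality"] + random.uniform(-0.3, 0.3)

def deep_evaluate(pathway):
    # Stage 2: the expensive, accurate evaluation.
    return pathway["true_quality"]

def terraced_scan(pathways, keep=3):
    # Rank every pathway by the cheap estimate...
    ranked = sorted(pathways, key=cheap_estimate, reverse=True)
    # ...then spend real effort only on the most promising few.
    survivors = ranked[:keep]
    return max(survivors, key=deep_evaluate)

pathways = [{"name": f"p{i}", "true_quality": random.random()} for i in range(20)]
best = terraced_scan(pathways)
print(best["name"])
```

So the "selection rule" is a heuristic proxy, and the duplication is acceptable because most pathways never receive the expensive evaluation at all.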
- in his description of "Conceptual Memory," he describes its several facets: e.g. "a set of category membership criteria which ... specify how instances of the concept can be recognized in terms of more primitive concepts" and "explicit norms attached to the roles which serve to distinguish very prototypical instances of the concept from more eccentric ones." This description of the memory seems to me to diverge from the original goal of attaining "Fluid Concepts." Is that the case, or am I not fully understanding the system?
- I like the idea of an "associative halo" between different concepts with weights specifying the length between them. This piece of the system seems to meld some connectionism into the program.

EPILOGUE
- I really like these examples of various "feats" which AI programs have performed. Although they are indeed "minor," as he criticizes, I still think that they have some notable worth. He pointed out that what is important is the "implementation detail." It seems that these programs at least provide a start and that later in development, one can slowly program greater and greater detail into each version until something noteworthy comes about.
- also, I liked the talk about "hardware vs. software." As I was reading this, it seemed to parallel the debate of symbolic vs. connectionist (or psychological vs. physiological). Symbolic programming seems to be our attempt at creating a purely software imitation of the brain, whereas connectionist programming seems to be our purely hardware imitation of the brain. Surely it is vain to say that the brain is purely hardware-like (side note: would that support a Chomskian view of the brain?) or software-like. I guess there is some warrant for making systems which combine features from both sides.
- also, a final comment. His description of computer mathematicians reminded me of an article I saw in the NY Science Times in the fall about a "reasoning" computer program which solved a mathematical proof (it was programmed solely to reason, not for any specific task, as I understand it) that had stumped mathematicians for 50 years (it was solved by a mathematician only a decade or so ago). Also, the program's proof was an original proof, different from the one created by the mathematician. Anyway, I thought that it was a pretty interesting article.

Tom Kornack
AI and Creativity

What is creativity, after all? What is really new and what is just old stuff put together? We are indeed quite quick to call humans creative and computers not creative. At the moment, I would indeed agree that computers are merely powerful parrots of programmers. But I also believe that there is nothing inherent about computer code that prevents them from being creative. By now we're all accustomed to accepting that brains are many magnitudes more complex and large than the largest neural networks. But we also might accept that neural nets aren't doing anything fundamentally different than brains (...er, well, perhaps this isn't true, but let's just run with it). Is it not a matter of massive upward scaling and tremendous patience before computers can appear to be creative? What gives? From my perspective, creativity is a fairly subjective measure that comes about when we encounter any observation that is not trivial to us. It seems that anything that we call a creative thought - a novel thought - can be called a connection of existing ideas; is there any such thing as a novel thought that was completely unmotivated? (A connected question: are there purely random numbers?) At the moment, though, we can wrap our brains around these neural networks and are able to predict their behavior on a macroscopic level. I am thoroughly skeptical that we will come anywhere close to the processing power necessary to effect creativity in my lifetime. And it is for this reason alone that I don't think computers can ever be called creative.

In the case of Letter Spirit, I would say that a thorough implementation of the outlined code in a backpropagation network would in fact be creative in generating the new letters. It is not a trivial task to imagine the rest of the characters of Optima. As such, I would call the network creative in its successful creation of new characters. Additionally, most of the odd generated characters in GridFont could be cleaned up by imposing some additional constraints like line continuity and low stroke complexity.


Elaine Huang

Fluid Concepts and Creative Analogies
Douglas Hofstadter

Whoo-hoo. Cool article. Really made me think about what exactly human creativity can entail, and if a machine can ever be creative, or whether it can merely simulate creativity.

Elaine on Letter Spirit:

Given the definitions and restrictions that Hofstadter and McGraw place on creativity, I'm not sure that we could actually get a program to be creative. I'm not even entirely sure that Letter Spirit, as it is proposed here, is truly creative in the way that they have defined creativity. A flaw that they point out about other programs which generate fonts or other work is the lack of novelty - the fact that other programs mimic using tricks or pre-entered components representative of human, not computer, creativity. But in actuality, isn't this exactly what human creativity is? Do humans truly come up with novel new concepts, or is the brilliance behind human creativity the ability to take what we already know of our environment and combine it in novel ways? These combinations, built upon each other, seem to yield pretty much everything that humans have devised and termed "creative." Letter Spirit, if it is ever implemented as described, would mimic the human ability to take an idea and then apply it creatively to a set of distinct items, but even this involves the clever coding tricks that the other programs use, no? The authors make reference to the Eliza program, which cleverly picks up key words and phrases and uses them to conduct a seemingly intelligent conversation, when in actuality the program has no consciousness or true understanding of what it is saying. Would not the creativity of Letter Spirit do much the same? It would pick out traits and judge them according to criteria determined by human code. Would there actually be an opinion, an understanding? It seems that there would be little more than a fitness measure, which seems to me to contain a comparable amount of understanding as Eliza. I have a very hard time convincing myself that the program described by Hofstadter and McGraw actually has more understanding and judgement than programs such as Eliza.
It seems to rate and select and discard by means of human coding, without being any more intelligent. It may be a more accurate mimicking of human creativity, but I can't really justify calling it creative in and of itself, at least not any more than the other programs to which Letter Spirit is compared.

Elaine on the Connectionist Approach:

Hmmm. Maybe I'm biased, but I still like the idea of a connectionist approach to gridfont generation. Though Platonic letters may be nebulous, abstract concepts, one can see quite easily by looking at the various gridfonts and other fonts presented that the fonts themselves are quite patterned. It would seem that a neural network would really be an effective tool for this task. The letters that the network created are impressive to me. The authors state that the fact that the network merely cropped the j to create the i makes the feat less impressive. I find this impressive, however, because it seems that this is much how a human might generate this letter. Given the seed letter j in a given gridfont and asked to create an i in a consistent font, I think I would hack off the bottom of the j as well. The fact that the network was able to realize that in most gridfonts an i is a j without the hook seems more a useful, human-like generalization than a cheap trick. At the same time, I wonder how useful a neural network could be for general creative purposes. If we trash the whole Cartesian idea that nothing a human can create is truly novel, then creativity may be an unpatterned ability, thus making a neural network a difficult architecture for modeling creativity. Finally, I was really surprised by the comparison made between the neural network GridFont and a symbolic, specific chess-playing game. It's interesting that the authors likened a neural network to the classic example of symbolic AI. I'm not sure how valid this is, since it seems that neural networks don't just explore every possibility in the space. I have to think about this more. The way it is presented, it is quite appealing. It also makes me think, well, if brute force is effective, and guarantees finding a solution if one exists, then why not take advantage of it? We have mighty fast processors and huge memories now in current computers.
Humans cannot effectively utilize brute force in their cognitive processes, but computers can in many cases (e.g., Deep Blue). Could we possibly solve more problems if we use brute force, rather than attempting to model the subtleties of human thought, such as creativity? This is a tangent. Science for the sake of knowledge. Truck on, Letter Spirit. Show me what you can do!

Elaine on Aaron:

Aaron does not perceive its drawings or interpret them as a human would. The authors use this as proof that Aaron does not model creativity in the way that Letter Spirit would. But if Aaron, or some other computer artist, were implemented with a Letter Spirit style of creativity, would it then interpret or perceive its work as a human would? I'm not so sure. I can't get over my stubborn idea that even if we built in a bazillion criteria on which the computer judged its own work, it still wouldn't give a flying fig. If this is the case, then what is the benefit of building the creativity in? If Aaron can create work that looks like human artwork, and if it were sophisticated enough that no human unfamiliar with the program would be able to distinguish it from human artwork, then they would undoubtedly judge it as human artwork. Would it matter if the computer judged its own work, or the work of another computer, as good or bad? If a computer did a series of drawings in its own style, and a hundred human critics judged the work, and a hundred computers judged the work, would any human take the computers' judgements to be more valid? If a computer could be made to care whether it was satisfied with its own work, it would only be doing so because it was programmed by a human to do so. Is this comparable with the biology of the human brain? Is it the same as saying that the only reason we care whether our artwork satisfies ourselves is that we are biologically programmed to do so? It is kind of scary that a computer might actually generate beautiful or moving work without actually having feelings or motivations other than simulated ones programmed in by a human. The argument that nothing a computer ever made could be as beautiful as something painted from the heart by a human is a hard one to make if a computer could paint exactly what a human could without having real emotions.
But then again, nature generates beauty without real thought, so why should we be scared if a computer can do so as well?

Elaine on Racter:

Hey, this program is really neat. I want to know more about how the sentences are generated, though. Having some familiarity with Eliza, the continuity of the vocabulary used is not surprising. But some of the sentences are quite long and grammatically complex. How were the rules programmed in? Are there just templates for structures? Also, since it seems that this would be massive and hard to code precisely, can this program make errors? Are there sentences generated which simply make no grammatical sense? The passage about steak loving lettuce and Bill being obsessed with Diane is a bit frightening, depending on how much the computer generated and how much of it was processed beforehand. It would be extremely impressive in its simulation of creativity if it started, for the most part, with just a bunch of words to select from and rules for combination. But it seems as though it was given much more structure than that, perhaps some ideas about human irony and concepts of love? What was the purpose of the program? Was it meant to demonstrate that computers can simulate creativity, or was it meant to be more entertaining? If the latter, then I would not be surprised if a whole lot of background info, not only about vocabulary and grammar but about literature in general, was preprogrammed. The authors' confusion over Nikolai is funny, but oddly disturbing...
This program managed to somewhat convincingly mimic human writing, humour, some understanding of complex concepts such as love, and a consciousness of self (in the silicon and epoxy comment). Makes you wonder- what cannot be mimicked using clever programming tricks? If clever programming can achieve anything that can be done with broader methods, such as neural networks and Letter Spirit, do the ends justify the means?
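The "just templates for structures?" guess above is easy to make concrete. This is a toy template-and-slots generator in the Mad Libs spirit; the templates and word lists are invented here (they are not Racter's actual data), but they show how a thin mechanism can produce oddly suggestive sentences:

```python
import random

# Hypothetical templates and vocabulary, loosely echoing Racter's tone.
TEMPLATES = [
    "The {adj} {noun} {verb} the {noun2}.",
    "{name} is obsessed with {noun}, yet {name} {verb} every {noun2}.",
]
WORDS = {
    "adj": ["electric", "melancholy", "crazed"],
    "noun": ["lettuce", "steak", "dream"],
    "noun2": ["typewriter", "symphony", "salad"],
    "verb": ["devours", "contemplates", "sings to"],
    "name": ["Bill", "Diane", "Nikolai"],
}

def generate(rng=random):
    template = rng.choice(TEMPLATES)
    # Fill each slot with a random word; a repeated slot ({name}) reuses one
    # pick, which lends the output a thin illusion of topical continuity.
    slots = {key: rng.choice(options) for key, options in WORDS.items()}
    return template.format(**slots)

print(generate())
```

If Racter works anything like this, its grammaticality comes for free from the templates, and its apparent wit is mostly in the reader.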

Elaine on AM:

The authors present the filtering of a program's conclusions almost as a way of cheating. However, if we look at this idea from a different angle, it seems almost as though this makes the programs more human. After all, what human author, artist, or mathematician includes all work along the way in what is finally presented? Humans do a tremendous amount of self-filtering in their creative processes. Letter Spirit does some as well. But it would seem that human arrogance and self-perception would never allow us to say that a computer's judgement of art or music or literature is more valid than our own. If a hundred computers judged a piece of artwork as bad, and a hundred human art critics loved it, could we ever say that the computers had the more valid opinion? If we reached the point of sophistication that computers were judging work alongside humans, wouldn't we still give more sway to human subjectivity? Perhaps then the 'human-machine hybrid' is really optimal, since we're more likely to trust human judgement of art or literature. If a computer can come up with a whole bunch of stuff and a human picks through it and puts out the best pieces, I think we'd be more likely to agree with those choices than if a computer generated a whole bunch of stuff and then picked through it itself. Maybe this is not a cheap trick so much as a neat way of utilizing a computer's ability to generate and a human's ability to judge.

Anyhoo. This was a really neat chapter. Let me think about it more.


Dave Lewis

When I went to Governor's School, one of the focuses was self-exploration. To that end, we took a couple of personality tests. One, based on the Gregorc model of personality, measured along two axes, Concrete-Abstract and Random-Sequential. Reading Hofstadter's paper made me think of the Concrete-Abstract dichotomy and the connectionist-symbolic dichotomy. Hofstadter writes in a very abstract style, and not surprisingly, he holds the symbolic view (without coming out and saying the 's' word). Others, who tend to write in concrete (and often more boring) styles, hold quite dearly to connectionism. So I postulate that the great chasm in cognitive science between these two schools of thought is based more upon personality types than anything else. Discuss.

One of Hofstadter's main problems with connectionism seems to be that he wrote this paper before Jordan and Elman developed the recurrent network. I.e., he sees creativity as an incredibly dynamic and active process, and feedforward networks as static, deterministic production machines. But with recurrency, the self-interaction and meandering that he desires can be had. Indeed, on page 443 he calls creativity a "process . . . in some sense recursive." (Sidenote: he uses recursion and iteration in this same paragraph to mean basically the same thing. An interesting usage, coming from a CS professor. Abelson and Sussman would have a fit.) Later in the hucksterism cum book chapter (see, I can use Latin, too), on pages 460-461, he talks about the misnomer of calling a network's production of output "processing". I think he says this because a normal feedforward network doesn't do the cyclical processing he theorizes for humans; however, I think processing is still an apt name for the interaction of weights, biases, and activations to produce some output.

Another reason Hofstadter dislikes connectionism is that he keeps looking for symbols. For thought, he wants a model to manipulate abstractly at some high level of representation. Obviously, he's not seeing this in neural networks. He doesn't speak about PCA or cluster analysis, however, which in some senses reveal the symbols the network has chosen. Page 464: "Even supposing that the results of GridFont had been superb, it would still be the case that this kind of thing is not even remotely close to what goes on in actual human minds." So, after talking about how the results of GridFont weren't up to snuff, and so it is obviously the wrong model, he says that he would discount their results anyway. Makes it kind of tough to prove anything to the guy. My favorite line, though, was page 465: "In any case, such a success by GridFont itself or by Great-Great-Gridfont is sheer fantasy at this point." Kind of like Letter Spirit, which he long ago stopped talking about, and never really talked about more than the concept. In fact, didn't Jim Marshall say that Letter Spirit was still very much a work in progress?

Why are mathematics and chess atypical of cognition?

In regards to Hofstadter's (and Boden's) belief that Geometry was a trivial model since it didn't know what was new and interesting, whereas a human would, I think that this is a product of our culture, not a human fundamental. What I mean is, in many classes students are forced to recreate proofs that have been done previously by others. And when they're doing them, do they really know if what they're doing is new or surprising? If it is surprising, does it mean that no one else has done it before?

For Bonus Points: What personality was I on the Gregorc scale? And bonus points for everyone if I can find the test, 'cause I'll bring it to class and we can all take it; it's short and pretty fun.


Aaron Hoffman

It's quite refreshing to read Hofstadter.

I think that Letter Spirit is really quality work. Unlike the ridiculous models that tumble down the bunny hill, Letter Spirit seems to model human cognition (in the appropriate micro-domain) fairly well. Although the program obviously cannot tackle problems in other domains, the flow from Imaginer to Drafter to Examiner to Abstractor and back and forth as necessary does begin to approach "a general framework of intelligence."

The fundamental question is: what does Letter Spirit tell us about cognition?

It tells us that creation is a dynamic, temporal, unpredictable process - but we knew that already - we could get that from SOAR. Letter Spirit is more interesting because in modelling creativity it tackles problems whose solutions are not specified.

But what do we learn from it?

It also models concepts as living in a space where distance is defined (each concept has a halo of concepts which live near it). Perhaps it would be better to say that he models a concept as a probability distribution over a space with a distance defined on it.

Cognitive psychologists have been doing that for years

The most interesting part is the dynamic interaction between the hierarchical (heterarchically tangled?) sets of constraints. Top-down constraints concerning the spirit of the gridfont interact with bottom-up constraints concerning the Platonic category membership rules. Codelets run around and pour glue on joints, structures are shaken, norm-violations are checked ... this huge search space is covered by a pretty neat parallel terraced scan and eventually it converges to a (hopefully) consistent gridfont.

Here is my problem. As much as I am a cognitive scientist in training, I am also a mathematician in training.

Perhaps creativity is necessarily dynamic and complex, but in Hofstadter's current model, I can't understand how these processes facilitate a mapping from a platonic alphabet to a gridfont realization in the same way that I can understand that a neural network is expanding the domain of a mapping between two vector spaces by constructing a basis for the function space out of sums of sigmoids. I'm resigned to thinking of cognition (at least in Letter Spirit) as this thing that is sensitive to initial conditions and like a roulette wheel or stock market, just has to be implemented to be understood.
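The "sums of sigmoids" picture above can be made concrete with a toy illustration (this is just the mathematical claim, nothing to do with Letter Spirit's actual code): two opposed sigmoids make a localized bump, and a weighted sum of shifted bumps approximates an arbitrary 1-D function.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bump(x, center, width):
    # difference of two shifted sigmoids: ~1 inside [center-width, center+width],
    # ~0 outside, with edges made sharp by the steepness factor s
    s = 5.0 / width
    return sigmoid(s * (x - (center - width))) - sigmoid(s * (x - (center + width)))

def approx(f, x, n=50):
    # piecewise-constant approximation of f on [0, 1] built from n bumps
    centers = [(i + 0.5) / n for i in range(n)]
    return sum(f(c) * bump(x, c, 0.5 / n) for c in centers)

target = lambda x: math.sin(2 * math.pi * x)
for x in (0.1, 0.4, 0.7):
    print(x, round(target(x), 3), round(approx(target, x), 3))
```

This is the sense in which a feedforward net "constructs a basis out of sums of sigmoids": the network learns the centers, widths, and weights rather than having them laid out on a grid, but the representational claim is the same.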

In some ways this is good. I have decided that neural networks (at least backprops without context units) aren't any more exciting than ANOVAs or other standard statistical tests - and I would hate to have to decide that cognition is boring.

However - I really want to know what is going on. Maybe I'm naive to think that everything elegant has an elegant mathematical formulation, but it would be awfully nice.

In response to the rest of the article: he was right to criticize the other programs - they were silly.

As far as the validity of the Turing test goes - I haven't chosen sides yet on that one. It's hard to imagine a machine whose behavior is indistinguishable from that of a human, yet whose internal mechanisms are like Searle's conception of the Chinese Room. Still - I'm skeptical.


Carl Wellington - Hofstadter

I really enjoyed reading the two articles this week. I found the Letter Spirit problem very interesting. I was amazed at the variety of letter styles which could be realized in such a constrained architecture. It was also interesting to look at the letters in Fig. X-6 individually and to see how much more difficult they were to decipher than when they were seen in the context of their style. Hofstadter's ideas about the creation of gridfonts through a recursive interaction on many different levels were fascinating, but the lack of a working program which realizes these ideas made me wonder about the feasibility of such a system. I agreed with most of his criticism of neural networks, but I wonder what he thinks of more recurrent architectures like the Elman nets which we studied earlier this semester.

The Epilogue discussed some much deeper issues which I feel are at the very heart of artificial intelligence research. As this course has progressed, I have actually been less impressed with many of the architectures which we have studied. Coming into the class, a neural network was basically a black box to me. I knew they could do really amazing things with pattern recognition and generalization, but I had no idea what they were actually doing. As Aaron has often pointed out, they are essentially just computing a weighted sum of sigmoid functions, or to put it another way, using regression to find a best-fit function. Once I understood this, it became much clearer how a net can solve a problem using this mapping procedure and which problems would be solvable.

In his criticism of neural networks in the Letter Spirit article, Hofstadter somewhat deconstructs what the network is actually doing and shows why its perfect reconstruction of the letter 'i' is not really that impressive, since it is so similar to the letter 'j'. His criticisms of the other programs in the Epilogue were based on similar arguments. The programs used clever techniques to create the illusion of creativity but were in fact deterministic procedures which simply produced output according to the instructions which the designers gave them. I strongly agree with Hofstadter's criticism of these projects and think that it is justified. The Eliza effect is very strong, so many people (myself included) want to believe that the art, music, mathematical theorems, or literature which these programs create (or more correctly: output) is the result of intelligence, but the only real intelligence is found in the programmer of these programs, not in the programs themselves.

The above discussion brings up a much deeper point which Hofstadter also discussed. How can a program actually "know" something, or harder still, how can it know that it knows something? Does a database "know" the material which is contained within it? Can the database be written so that it knows that it knows that material, or that it knows that it knows that it knows that material? Recursive structures of self-awareness like these bring to mind Gödel-type arguments against any real understanding by a program, but I won't go into that here. Hofstadter uses the analogy of putting together a jigsaw puzzle and claiming to have created a work of art. This seems very similar to Searle's Chinese Room argument. These arguments seem to be very valid criticisms of the AI programs which are out there today. They may be able to do very impressive things, but they are simply interesting problem-solving methods with no real understanding or intelligence.

The question to ask, therefore, is "How can we tell if something actually is intelligent or creative?" Hofstadter says that he does not consider a product creative on external, objective grounds alone, without any account of how it was created. I agree. My friend Tim took a board used as a spatter shield when his house was being repainted and cleaned it up and framed it. The end result is something which looks very similar to a Jackson Pollock painting which I saw in an art museum, but I don't think the painting on Tim's wall could be called creative (although possibly Tim's use of the scrap board could be). However, since something which was claimed to be intelligent would most likely be essentially a black box like our brains are, it seems like some test such as the Turing test would be necessary to discern whether something was actually intelligent. Still, it seems plausible that a very clever program could beat many humans in a Turing test (be perceived as more intelligent) even if it had absolutely no real understanding or intelligence. For instance, chess is a limited enough domain that brute-force methods now consistently beat human methods, but these programs definitely don't have any sort of intelligence or understanding. Perhaps someday the Chinese transformation rulebook of Searle's Chinese Room will be created. Then how is this different from an intelligent human brain? Hofstadter believes in the probes of the scientific method, but what if these weren't powerful enough? A spring-powered watch and a battery-powered watch perform their function using completely different methods, but from external observation they seem identical. If two things are outwardly identical but use very different processes, how can we tell if they are actually equivalent?

Anyway, I'll wrap up my rambling. I personally doubt that an understanding of intelligence is possible without a very intimate understanding of the inner workings of the brain. I also believe that it is arrogant to think that we can understand how the brain thinks by looking at high-level descriptions and ignoring lower-level activity. The failure of high-level cognitive models seems to point to this conclusion as well. I also think that the notion that a predicate calculus or some other formal system could produce consciousness, creativity, and understanding is absolutely ludicrous. Human thought is not simply an algorithm.