Here are some options for covering neural networks in an artificial intelligence course. Each of the following topics could fill an entire lecture or be mentioned briefly in a quicker, more general overview. The first topic introduces the biological inspiration and appeal of neural network models. The second and third topics provide historical background on simple correlation-driven learning and on error-driven learning, leading up to the more complex back-propagation method. For students with a good foundation in calculus, the fourth topic can include a derivation of the generalized delta rule. The fifth topic can explore a wide variety of network applications, including NETtalk [10], which learns to pronounce English text. After seeing a number of examples, it is interesting to compare and contrast the styles of representation found in network models with those found in more traditional symbolic models. In the sixth topic, the importance of including dynamics in networks leads to the exploration of recurrent models; Elman's article [2] provides a number of interesting natural language examples. Finally, there are numerous more advanced topics which are also quite interesting. The instructor may wish to read [3][4] for more background on networks.
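For instructors preparing the fourth topic, the end result of the derivation can be summarized compactly. The following is the standard statement of the generalized delta rule (in conventional notation, not taken from any particular source cited above): each weight change is the product of a learning rate, an error signal, and the presynaptic activity, with the error signal computed differently for output and hidden units.

```latex
\Delta w_{ji} = \eta \, \delta_j \, o_i
\qquad
\delta_j =
\begin{cases}
(t_j - o_j)\, f'(\mathrm{net}_j) & \text{for output units,}\\[4pt]
f'(\mathrm{net}_j) \sum_k \delta_k w_{kj} & \text{for hidden units,}
\end{cases}
```

where $o_i$ is the activity of the sending unit, $t_j$ the target, $f$ the unit's activation function, and the sum over $k$ runs over the units that unit $j$ sends connections to. The hidden-unit case is what distinguishes back-propagation from the simpler error-driven rules of the earlier topics.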
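A compact classroom demonstration can make the back-propagation topic concrete. The sketch below is an illustrative choice, not material from the article: a tiny 2-4-1 sigmoid network, written in plain Python with explicit delta terms, trained on XOR (the classic problem that correlation-driven and single-layer error-driven rules cannot solve). All names and hyperparameters are the author's own assumptions.

```python
# A minimal back-propagation ("generalized delta rule") sketch:
# a 2-input, 4-hidden, 1-output sigmoid network learning XOR.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training patterns: (input pair, target)
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

n_in, n_hid = 2, 4
# Weights, with the bias treated as an extra input fixed at 1.
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1]))) for row in w_hid]
    o = sigmoid(sum(w * hi for w, hi in zip(w_out, h + [1])))
    return h, o

def train_epoch(lr=0.5):
    total = 0.0
    for x, t in patterns:
        h, o = forward(x)
        total += 0.5 * (t - o) ** 2
        # Output delta: (t - o) * f'(net); for a sigmoid, f'(net) = o(1 - o).
        d_out = (t - o) * o * (1 - o)
        # Hidden deltas: propagate d_out back through the output weights.
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # Weight change = learning rate * delta * presynaptic activity.
        for j in range(n_hid):
            w_out[j] += lr * d_out * h[j]
        w_out[n_hid] += lr * d_out            # output bias
        for j in range(n_hid):
            for i in range(n_in):
                w_hid[j][i] += lr * d_hid[j] * x[i]
            w_hid[j][n_in] += lr * d_hid[j]   # hidden bias
    return total

losses = [train_epoch() for _ in range(5000)]
print(losses[0], losses[-1])
```

In lecture, the interesting point to draw out is the hidden-unit delta: it is the only line that goes beyond the simple error-driven rules of the earlier topics, and it is exactly the term the calculus derivation in the fourth topic justifies.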