
Topic: Introduction to Connectionism

Terminology

Characteristics of Biological Networks

Characteristics of Connectionist Networks

Appeal of Connectionist Models

Brief History

Specifying a connectionist network

Architectures (simplest to most complex)

  1. Linear: feed-forward, one-layer, identity activation function (see Figure 2)
  2. Perceptron: feed-forward, one-layer, step activation function
  3. Backward-Propagation: feed-forward, multi-layer, sigmoid activation function
  4. Recurrent: feedback and lateral connections, multi-layer, non-linear activation function
  5. Constraint-satisfaction: symmetrical connections, no true layers, non-linear activation function
  6. Arbitrary: anything goes, hard to analyze
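The activation functions named in the list above can be sketched concretely. The following is a minimal illustration (not from the lecture itself) of a single unit's forward pass under the identity, step, and sigmoid activations; the function names and the default threshold of 0 are assumptions for the sketch.

```python
import math

def identity(net):
    # Linear architecture: output equals net input
    return net

def step(net, theta=0.0):
    # Perceptron: fire (1) only if net input exceeds the threshold theta
    return 1 if net > theta else 0

def sigmoid(net):
    # Back-propagation networks: smooth, differentiable squashing function
    return 1.0 / (1.0 + math.exp(-net))

def unit_output(weights, inputs, activation):
    # One unit of a one-layer feed-forward network:
    # weighted sum of inputs passed through the activation function
    net = sum(w * x for w, x in zip(weights, inputs))
    return activation(net)
```

For example, `unit_output([1, 1], [1, 0], identity)` returns the raw net input 1, while wrapping the same net input in `step` or `sigmoid` gives the perceptron and back-propagation behaviors respectively.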

Some examples

For some simple problems it is relatively easy to determine an appropriate set of weights and thresholds without doing any learning. For example, consider the AND and OR problems shown below in Figure 3.
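As a sketch of such hand-chosen weights (one valid choice among many; these particular values are assumptions, not necessarily the ones in Figure 3), a single step-threshold unit with weights of 1 on both inputs solves AND with a threshold of 1.5 and OR with a threshold of 0.5:

```python
def step(net, theta):
    # Fire (1) only if the net input exceeds the threshold theta
    return 1 if net > theta else 0

def and_unit(x1, x2):
    # Net input is x1 + x2; only (1, 1) exceeds the threshold 1.5
    return step(1 * x1 + 1 * x2, 1.5)

def or_unit(x1, x2):
    # Any input of 1 pushes the net input past the threshold 0.5
    return step(1 * x1 + 1 * x2, 0.5)
```

Geometrically, each threshold choice places the separating hyperplane (here, a line in the two-dimensional input space) so that the positive examples fall on one side and the negative examples on the other.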

Notice that a single-layer network which uses the step activation function forms two decision regions separated by a hyperplane. The position of this hyperplane depends on the connection weights and threshold. Thus if the inputs from two classes are separable (i.e. fall on opposite sides of some hyperplane), then there exists a single-layer network that correctly categorizes the inputs. However, if the inputs are not separable, a single-layer network is not sufficient. In particular, a single-layer network will not be able to solve the XOR problem because a combination of decision regions is needed. Figure 4 shows the kinds of regions that can be formed by multi-layer networks.
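A multi-layer network can combine decision regions by feeding hidden units into an output unit. One classic hand-wired construction (an illustrative choice, not necessarily the one in Figure 4) computes XOR as the AND of an OR unit and a NAND unit:

```python
def step(net, theta):
    # Fire (1) only if the net input exceeds the threshold theta
    return 1 if net > theta else 0

def xor_net(x1, x2):
    # Hidden layer: two step units, each carving out one half-plane
    h1 = step(x1 + x2, 0.5)       # OR:   fires if either input is 1
    h2 = step(-x1 - x2, -1.5)     # NAND: fires unless both inputs are 1
    # Output layer: AND of the hidden units, i.e. the intersection
    # of the two half-planes, which is exactly the XOR region
    return step(h1 + h2, 1.5)
```

The intersection of the two hidden-unit half-planes is a region no single hyperplane could form, which is why one layer cannot solve XOR but two can.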





meeden@cs.swarthmore.edu