The file conx.py
implements a Python library for
experimenting with artificial neural networks. This version
adds better visualization capabilities to the
original library. Feel free to look at the source code if you're
interested in seeing how the networks are implemented.
In all of the examples below the network will be called
n. In the Python interpreter, you can use the following commands.
n.showPerformance() will display how the network responds to
each of the training patterns. Initially it will get every
pattern wrong since the weights are randomly initialized and no
learning has taken place.
n.printWeights(layer1, layer2) will display the network's
current weights between layer1 and layer2. For
simple two-layer networks, the layers are called 'input' and 'output'.
n.train() will repeatedly train the network on the set of
training patterns. Each pass through the entire set of patterns is called an
epoch. When the network is learning successfully, the total
error should decrease over time.
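conx handles all of this internally. Purely as an illustration of what an epoch is, here is a minimal sketch of epoch-based training in plain Python, with the classic perceptron rule standing in for conx's actual learning algorithm (the starting weights, learning rate, and hard-threshold activation are assumptions for the sketch, not conx's settings):

```python
# Illustrative sketch only -- NOT conx's implementation.  A single
# threshold unit learns AND with the perceptron rule; the total
# error per epoch shrinks to zero as learning succeeds.

patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]   # hypothetical starting weights
bias = 0.0
rate = 0.1             # hypothetical learning rate

for epoch in range(20):
    total_error = 0
    for (x1, x2), target in patterns:   # one full pass = one epoch
        output = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
        error = target - output
        total_error += abs(error)
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error
    print(epoch, total_error)
```

Running this prints the epoch number alongside the total error, which bounces around briefly and then settles at zero once the unit classifies every pattern correctly.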
At the Unix prompt, do: python -i and-net.py.
Before training, test the AND network's performance and look at its
weights. Then train the network and re-test its performance and check
out how the weights have changed. Do the weights make sense to you?
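To see the kind of solution the network can converge on, here is one hypothetical weight setting that computes AND (a hard threshold is used here just to show the idea; your trained weights will differ in scale):

```python
# A hypothetical AND solution: both weights positive, with a bias
# negative enough that only the input (1, 1) pushes the weighted
# sum over the threshold.
w1, w2, bias = 1.0, 1.0, -1.5

def and_unit(x1, x2):
    return 1 if w1*x1 + w2*x2 + bias > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', and_unit(x1, x2))
```

The weighted sums for the four patterns are -1.5, -0.5, -0.5, and 0.5, so only (1, 1) crosses the threshold.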
At the Unix prompt, do: python -i or-net.py and try all of the
same commands as before. Convince yourself that the weights make sense.
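As with AND, a single threshold unit suffices for OR. One hypothetical weight setting (again with a hard threshold standing in for conx's activation function):

```python
# A hypothetical OR solution: the same positive weights as the AND
# sketch, but a smaller-magnitude negative bias, so any single
# active input is enough to push the sum over the threshold.
w1, w2, bias = 1.0, 1.0, -0.5

def or_unit(x1, x2):
    return 1 if w1*x1 + w2*x2 + bias > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', or_unit(x1, x2))
```

Comparing this with the AND sketch shows that the bias alone can shift a unit from requiring both inputs to requiring just one.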
Next run the file xor-net.py in the same way. When you train
this network, it will be unable to learn.
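The failure is not a matter of training longer: XOR is not linearly separable, so no single threshold unit (and hence no two-layer network of this kind) can compute it. A brute-force search over a grid of weights and biases makes this concrete (the grid bounds and step size here are arbitrary choices for the demonstration):

```python
# Search a grid of (w1, w2, bias) settings for a single threshold
# unit.  Settings that compute AND and OR exist in the grid, but no
# setting computes XOR, because XOR is not linearly separable.
import itertools

def unit(w1, w2, b, x1, x2):
    return 1 if w1*x1 + w2*x2 + b > 0 else 0

def solvable(targets):
    grid = [i / 2 for i in range(-8, 9)]   # -4.0 .. 4.0 in steps of 0.5
    for w1, w2, b in itertools.product(grid, repeat=3):
        outs = [unit(w1, w2, b, x1, x2)
                for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
        if outs == targets:
            return True
    return False

print(solvable([0, 0, 0, 1]))   # AND: True
print(solvable([0, 1, 1, 1]))   # OR:  True
print(solvable([0, 1, 1, 0]))   # XOR: False
```

No matter how fine the grid, the XOR search comes up empty: there is no line in the plane separating the patterns (0, 1) and (1, 0) from (0, 0) and (1, 1).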
Run the file xor-3layer.py. In this case the network has
three layers (input, hidden, and output) instead of just two (input,
output). To see all of the weights for this network requires two calls
to n.printWeights, one for each adjacent pair of layers.
After training this three-layer network, draw the network with all of
the trained weights and biases and figure out how it is solving the problem.
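For reference, here is one hand-built way a three-layer network can solve XOR (a sketch with hard thresholds, not the weights your trained network will necessarily find): one hidden unit computes OR, the other computes NAND, and the output unit ANDs them together. Trained networks often settle on a scaled-up variant of a scheme like this.

```python
# A hand-built three-layer XOR solution (illustrative weights).
def unit(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def xor_net(x1, x2):
    h1 = unit([1.0, 1.0], -0.5, [x1, x2])    # hidden unit 1: OR
    h2 = unit([-1.0, -1.0], 1.5, [x1, x2])   # hidden unit 2: NAND
    return unit([1.0, 1.0], -1.5, [h1, h2])  # output unit: AND

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', xor_net(x1, x2))
```

The hidden layer re-represents the inputs so that the four patterns become linearly separable: both hidden units fire exactly when the inputs disagree, which is precisely the XOR condition.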