# CS81 Lab1: Learning with neural networks

Due by noon next Thursday

In this lab you will familiarize yourself with pyrobot, a Python tool for controlling both simulated and physical robots. All of the source code for pyrobot is located in /usr/local/pyrobot/; feel free to explore any aspect that interests you. Pyrobot includes many machine learning tools, such as neural networks, which we will explore this week. To begin, run update81 to copy some starting-point files into your home directory (cs81/labs/1/).

You may work with a partner on this lab.

## Simple neural networks to solve logic problems

I have provided three examples of simple neural networks for solving the logic problems and, or, and xor.

1. Open the and.py file in an editor and read through the code. This is a linearly separable problem so a hidden layer is not necessary to solve it.

2. Train a network to solve the logical and problem:
```
% python -i and.py
```
One pass through the entire data set is defined as an epoch of training. During training, the total summed squared error across all the output units is reported, along with the percentage of output units whose activations are within tolerance of the desired target values.
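The two statistics reported each epoch can be computed directly from the output activations and targets. Here is a minimal sketch (the function names are mine for illustration, not pyrobot's own):

```python
def summed_squared_error(outputs, targets):
    # Total squared difference across all output units
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

def percent_correct(outputs, targets, tolerance=0.1):
    # Percentage of output units whose activation is within
    # tolerance of its desired target value
    hits = sum(1 for o, t in zip(outputs, targets) if abs(o - t) <= tolerance)
    return 100.0 * hits / len(targets)
```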

3. To check the network's performance at the end of training, do the following within the interactive python session:
```
>>> n.showPerformance()
```

4. To see a graphical representation of this same information, do the following:
```
>>> n.showNetwork()
>>> n.showPerformance()
```
A new window will appear depicting the current activations in the network. There will be one box for each unit in the network. The whiter the box, the more positive the activation. The blacker the box, the more negative the activation.

5. To see the learned weights do:
```
>>> n.printWeights()
```
Draw a picture of the network with its associated weights and biases and try to understand how it has solved the problem.

6. Quit python and train the and network again. Examine the final weights. Because the initial weights are randomized before each new training session, you should get a slightly different result each time.
Try the above steps on the other two logical problems, or and xor. Which of the three problems tends to find a solution in the fewest epochs (one epoch being one run through the entire data set)?
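To see how a single sigmoid unit can solve a problem like and, try evaluating one by hand. The weights below are illustrative values I chose, not ones the network will actually learn, but any solution has the same character: a strongly negative bias that only the sum of both positive weights can overcome.

```python
import math

def sigmoid(x):
    # Standard logistic activation used by the network's units
    return 1.0 / (1.0 + math.exp(-x))

# Hand-picked illustrative weights for and: the bias is so negative
# that the unit only turns on when both inputs are on.
w1, w2, bias = 10.0, 10.0, -15.0

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(sigmoid(w1 * a + w2 * b + bias), 3))
```

Compare this structure to the weights printWeights() reports after a real training run.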

Try experimenting with the learning rate (called epsilon). Set it very low (say, 0.1) and then very high (say, 1.0). How does the speed of learning change?
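The effect of epsilon can be seen in isolation with the delta rule on a single weight: each update moves the weight by epsilon times the error, so a larger epsilon takes larger steps. This sketch uses a bare linear unit rather than pyrobot's network class; keep in mind that in a real multi-weight network a very large epsilon can also overshoot and oscillate rather than settle.

```python
def train_weight(epsilon, steps=20):
    # One linear unit, one input fixed at 1.0, target output 1.0.
    # Delta rule: w += epsilon * (target - output) * input
    w = 0.0
    for _ in range(steps):
        output = w * 1.0
        w += epsilon * (1.0 - output) * 1.0
    return w

print(train_weight(0.1))  # creeps toward 1.0
print(train_weight(1.0))  # reaches 1.0 in a single step
```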

## Using neural networks to control robots

One method for teaching a neural network to control a robot is called offline training and involves the following steps. First, hand-code a robot controller. Second, use this controller to collect data for training a neural network. Third, train a neural network with the collected data. Fourth, use the trained network to control the robot and test its performance.

1. I have provided a hand-coded program called wallFollow.py that causes the robot to follow walls on its left-hand side. To observe a robot using this program, do the following:
```
% pyrobot -s PyrobotSimulator -w Tutorial.py -r PyrobotRobot60000 -b wallFollow.py
```
The -s flag specifies which simulator you would like to use. Each simulator has a number of different worlds, which are specified by the -w flag. The simulator can control multiple robots simultaneously; each robot has a name, specified by the -r flag. Finally, the robot is controlled by a brain, specified by the -b flag.

This will open two windows: the pyrobot control window and the simulator window. In the simulator window, select the View menu and choose Trail to see the robot's path. In the pyrobot window, push the Run button to start the robot. Every 500 steps the program will report the average distance of the robot from the wall on its left side. We will use data collected from a program running a controller like this to train a neural network.

To quit, first press the Stop button in the pyrobot window to suspend the brain. Then go to the Robot menu and select Unload Robot. Finally, go to the File menu and select Exit. If the pyrobot window fails to close, then in a terminal window do: killall -9 pyrobot.

2. Open the wallFollow.py program in an editor. Notice that it contains a class that inherits from a Brain class; in pyrobot all controllers are constructed in this manner. Each brain can optionally use the setup method for initialization, which is run once when the object is instantiated. Each brain must have a step method, which is executed on each time step (typically about 10 times a second). The move command is what causes the robot to move. It takes two parameters, translation (forward/backward) and rotation (left/right), each of which can be in the range -1 to +1.
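The overall shape of such a brain can be sketched as below. The Brain base class here is a stand-in stub so the example is self-contained, and the threshold values are made up; a real pyrobot brain would inherit pyrobot's actual Brain class, read distances from its sonar device, and call the real move command from step.

```python
class Brain:
    # Stand-in stub for pyrobot's Brain base class
    def setup(self): pass
    def step(self): pass

class SimpleWallFollow(Brain):
    # Illustrative left-wall follower; thresholds are invented
    TOO_CLOSE, TOO_FAR = 0.5, 1.5

    def setup(self):
        self.moves = []            # record (translate, rotate) pairs

    def move(self, translate, rotate):
        # pyrobot's real move() drives the robot; here we just log it
        self.moves.append((translate, rotate))

    def step_with_reading(self, left_distance):
        # Steer away when too close to the left wall, back toward it
        # when too far, otherwise cruise straight ahead.
        if left_distance < self.TOO_CLOSE:
            self.move(0.3, -0.3)   # veer right, away from wall
        elif left_distance > self.TOO_FAR:
            self.move(0.3, 0.3)    # veer left, toward wall
        else:
            self.move(0.5, 0.0)    # go straight
```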
3. We will use the program collectData.py to execute the wall following behavior and collect data along the way. It is important to remember that all data used with a neural network must be scaled to match the range of the activation function. In our case, all data must be scaled between 0 and 1. To run the data collection program do:
```
% pyrobot -s PyrobotSimulator -w Tutorial.py -r PyrobotRobot60000 -b collectData.py
```
When you see a message that the data collection is done, stop the robot and exit from pyrobot. You should now have two files called sensorInputs.dat and motorTargets.dat saved in your directory.
4. Next we will use the program trainNetwork.py to train the neural network on the saved data. We do not need to use the simulator to do this part.
```
% python trainNetwork.py
```
This will take several minutes to complete. It will execute 1000 epochs of training on the collected data. Watch the error and percent correct during training. Does error consistently go down? Does percent correct consistently go up? After training is complete, you will have a new file in your directory called wallFollow.wts containing the weights and biases of the most successful version of the trained neural network.
5. Finally, we can use the saved weights to see how well the network controls the robot.
```
% pyrobot -s PyrobotSimulator -w Tutorial.py -r PyrobotRobot60000 -b testNetwork.py
```
Be sure to "View" the robot's "Trail" in the simulator window before pressing the "Run" button in the pyrobot window. How does the neural network controlled robot perform compared to the hand-coded teacher program? In what ways does the neural network controlled robot behave differently from the teacher?

You will go through the same series of steps as above, but this time using your own hand-coded teacher program. Feel free to explore other worlds in the pyrobot simulator. You can look at the source code of any simulator world in:

```
/usr/local/pyrobot/plugins/worlds/Pyrobot/
```
You may even create your own simulator world. If you create your own world it can be loaded directly from your home directory.

You may also explore other robot devices beyond sonar, including light sensors, cameras, and grippers. Below is a summary of how to use each type of device. In order to experiment with a device it must be added to the robot that you are using in a particular world. You can experiment with these devices by typing the commands described below in the command-line of the pyrobot window.

• Sonar Sensors
The Tutorial.py world includes a robot equipped with 16 sonars, which are numbered 0-15 with 0 starting at the front left and continuing clockwise. You can access the value of any particular sonar sensor by doing:
```
robot.sonar[0][sensorNum].value
```

• Light Sensors
The Braitenberg.py world includes a large light emitter in the center and a robot equipped with two front light sensors, numbered 0-1 with 0 being the left and 1 being the right. You can access the values of the light sensors by doing:
```
robot.light[0][sensorNum].value
```
If you'd like to see a brain that uses light sensors, try running the brain BraitenbergVehicle2b.py in the Braitenberg.py world.

• Camera
The Tutorial.py world includes several colored objects and a robot equipped with a camera. An easy way to experiment with various image processing tools is to use the menus in the pyrobot window. Go to the Devices section, select camera[0], and then press the View button. This will open a camera window.

Next, point the robot so that it is facing the blue box in the upper-left corner of the Tutorial.py world. In the camera window, left-click the mouse on the blue color; this will turn all the blue color red. Notice in the pyrobot window that you've just added a match filter, which takes three parameters for RGB values. Next, in the camera window, select Filter, then Blobify, and then Red. To see all the filters you've created, in the camera window, select Filter and then List filters. The current filters will be displayed in the pyrobot window. If you were doing these steps in a program, you could use these filter commands directly and would only need to do them once at the start of the program.

The filter results will be numbered in the order that you added them. In this example, the match filter will be 0 and the blobify filter will be 1. You can access the camera filter results by doing:

```
robot.camera[0].filterResults[filterNum]
```
The blobify filter returns a list of lists. Each sublist represents the information on one blob. The sublists are ordered from largest to smallest blob. Typically we only focus on the sublist numbered 0, representing the biggest blob in view. To get the largest blob's data do:
```
robot.camera[0].filterResults[1][0]
```
This will return a list of five values representing the bounding box of the blob and its area: (x1, y1, x2, y2, area).
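As a worked example of using the blob data, here is one way to turn the bounding box into a rotation command that centers the blob in view. The 160-pixel image width and the helper name are my assumptions for the sketch, not values from pyrobot:

```python
IMAGE_WIDTH = 160   # assumed camera resolution in pixels

def rotation_toward_blob(blob):
    # blob is (x1, y1, x2, y2, area) from the blobify filter.
    x1, y1, x2, y2, area = blob
    center = (x1 + x2) / 2.0
    # offset in [-1, 1]: negative when the blob is left of center
    offset = (center - IMAGE_WIDTH / 2.0) / (IMAGE_WIDTH / 2.0)
    # rotate left (positive) when the blob is to the left
    return -offset
```

This value could then be passed as the rotation argument of the move command.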

• Gripper
The CamWorld.py world includes a robot with a gripper and some small pucks that can be picked up and moved around. You can control the gripper by doing:
```
robot.gripper[0].gripperCommand()
```
You can see a list of all the gripper commands by going to the Device area of the pyrobot window and selecting gripper[0]. Then press the View button.

• Simulation (to get and set pose of entities)
Through the simulation device you can find out and set the locations of various entities in the simulation such as robots and pucks. In order to access an entity in the simulation you need to know its name, as designated in the world file. For example, in the Tutorial.py world, the robot is named "RedPioneer". To find out the robot's location do:
```
robot.simulation[0].getPose("RedPioneer")
```
This will return a list of three values representing its x, y location and heading in radians. To move the robot to a particular location do:
```
robot.simulation[0].setPose("RedPioneer", x, y, heading)
```
where x and y are within the bounds of the world and heading is in radians.
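One use of getPose is to compute how far the robot must turn to face a target point. A sketch using plain trigonometry (turn_toward is my name, not a pyrobot call):

```python
import math

def turn_toward(pose, target_x, target_y):
    # pose is (x, y, heading) as returned by getPose, heading in radians.
    x, y, heading = pose
    desired = math.atan2(target_y - y, target_x - x)
    # wrap the difference into [-pi, pi) so the robot takes the short way
    return (desired - heading + math.pi) % (2 * math.pi) - math.pi
```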

## Submit

Put all of the files you created in the cs81/labs/1 directory. In the summary file provided, be sure to describe what your teacher program does, how you trained the network, and your results.

When you are done, run handin81 to turn in your completed lab work.