The perceptron learning rule is:

    w += learningRate * (target - output) * input
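As a hedged illustration, here is a minimal Python sketch of this rule
for a 0/1 threshold unit (the function name and the epoch cap are
assumptions, not part of the original; a bias can be handled by
appending a constant 1 to every input vector):

    # Minimal sketch of the perceptron learning rule for a 0/1 threshold unit.
    def train_perceptron(samples, learning_rate=0.1, max_epochs=100):
        # samples: list of (input_vector, target) pairs with 0/1 targets
        w = [0.0] * len(samples[0][0])      # one weight per input
        for _ in range(max_epochs):
            converged = True
            for x, target in samples:
                output = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                if output != target:
                    converged = False
                    # w += learningRate * (target - output) * input
                    w = [wi + learning_rate * (target - output) * xi
                         for wi, xi in zip(w, x)]
            if converged:
                break
        return w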
For the following problems, explain whether the function is linearly
separable. You may want to use 3D pictures of cubes to visualize the
functions. If a function is separable, determine a set of weights that
solves the problem (you can do this by hand; you do not need to use
the perceptron learning rule).
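As a hedged, hand-worked example of the kind of answer expected (the
concrete functions for this problem are not restated here): 3-input
AND is linearly separable, since the plane x1 + x2 + x3 = 2.5 puts the
single true corner of the cube on one side of it. The sketch below
checks a hand-picked weight vector by enumerating all eight corners:

    from itertools import product

    # Hand-picked weights for 3-input AND: the plane x1 + x2 + x3 = 2.5
    # separates corner (1,1,1) from the other seven corners of the cube.
    w, bias = (1.0, 1.0, 1.0), -2.5

    for x in product((0, 1), repeat=3):
        output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
        assert output == (1 if all(x) else 0)
    print("weights", w, "bias", bias, "compute 3-input AND")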
The Q-learning update rule is:

    Q(s,a) += learningRate * (reward + discount * max_a' Q(s',a') - Q(s,a))
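A minimal tabular sketch of this update in Python (the defaultdict
table and the parameter defaults are illustrative assumptions):

    from collections import defaultdict

    Q = defaultdict(float)      # Q[(state, action)], defaults to 0.0

    def q_update(state, action, reward, next_state, actions,
                 learning_rate=0.1, discount=0.9):
        # max over a' of Q(s',a'), as in the update rule above
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += learning_rate * (
            reward + discount * best_next - Q[(state, action)])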
Consider the following 3x3 grid world, where the number in each cell
is the reward for that state:

  -------------
2 | 0 | 0 | +1|
  -------------
1 | 0 | 0 | -1|
  -------------
0 | 0 | 0 | 0 |
  -------------
    0   1   2
We will represent the state in column,row format. Suppose that the
actions the agent can take are to go north, east, south, or west. If
the agent tries to move in a direction that would take it off the
boundary of the grid, it remains in its current state and receives
the reward for that state on that action.
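A sketch of this transition rule, assuming states are (column, row)
pairs on the 3x3 grid and the rewards shown in the grid above (any
terminal-state handling is omitted):

    MOVES = {"n": (0, 1), "e": (1, 0), "s": (0, -1), "w": (-1, 0)}
    REWARDS = {(2, 2): 1.0, (2, 1): -1.0}   # from the reward grid; 0 elsewhere

    def step(state, action):
        col, row = state
        dc, dr = MOVES[action]
        new_col, new_row = col + dc, row + dr
        if not (0 <= new_col <= 2 and 0 <= new_row <= 2):
            new_col, new_row = col, row   # off-grid move: stay in place
        return (new_col, new_row), REWARDS.get((new_col, new_row), 0.0)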
After 500 steps of training, suppose that the Q-table contains the
following values.
                       actions
  state       n       e       s       w
   0,0      0.73    0.69    0.65    0.65
   0,1      0.76    0.81    0.65    0.72
   0,2      0.00    0.90    0.17    0.00
   1,0      0.81    0.00    0.35    0.48
   1,1      0.90   -0.97    0.62    0.67
   1,2      0.82    1.00    0.79    0.68
   2,0      0.00    0.00    0.00    0.21
Using the grid below, draw an arrow in each location to show the
agent's current policy based on the Q-table.
  -------------
2 |   |   |   |
  -------------
1 |   |   |   |
  -------------
0 |   |   |   |
  -------------
    0   1   2
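For reference, the greedy policy can be read off mechanically: in each
state, the arrow points along the action with the largest Q-value. A
sketch over the table above (states 2,1 and 2,2 do not appear in the
table, so they are left out here as well):

    # Q-values transcribed from the table above: (column, row) -> (n, e, s, w)
    q_table = {
        (0, 0): (0.73, 0.69, 0.65, 0.65),
        (0, 1): (0.76, 0.81, 0.65, 0.72),
        (0, 2): (0.00, 0.90, 0.17, 0.00),
        (1, 0): (0.81, 0.00, 0.35, 0.48),
        (1, 1): (0.90, -0.97, 0.62, 0.67),
        (1, 2): (0.82, 1.00, 0.79, 0.68),
        (2, 0): (0.00, 0.00, 0.00, 0.21),
    }
    ACTIONS = ("n", "e", "s", "w")

    for state, values in sorted(q_table.items()):
        best = ACTIONS[values.index(max(values))]
        print(state, "->", best)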