## CS63 Artificial Intelligence

### Exam 1 Review

#### History of AI

1. Throughout the history of AI, there have been two contrasting approaches: symbolic and subsymbolic. The symbolic approach dominated early on, and the subsymbolic approach is currently dominant. Compare and contrast these two approaches to AI.
2. Rather than focusing solely on one of these approaches, why do some researchers advocate for an anarchy of methods?

#### Intelligent Agents

1. Define the term agent.
2. Give an example of an agent and describe how it meets the criteria of the definition.
3. Define artificial intelligence in terms of agents.
4. AI agents can operate in environments that are fully or partially observable, single- or multi-agent, deterministic or stochastic, static or dynamic, and discrete or continuous.
   1. Which properties are associated with the most challenging environments? Give an example of such an environment.
   2. Which properties are associated with the simplest environments? Give an example of such an environment.
   3. Solitaire is a card game that one can play by oneself. There are many variations, but all begin with a shuffled deck of cards that is then dealt out in various ways. Characterize solitaire based on the list of environmental properties given above.

#### Solving Problems by Searching

1. Define in your own words the following terms: state, state space, search tree, search node, goal state, and action.
2. What kinds of problems can state space search be applied to? How do you formulate a problem so that this technique can be used?
3. This is a slight variation on the water pitcher problem that we talked about in class. You have three pitchers, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet. You can fill the pitchers up to the top, pour them from one to another, or empty them onto the ground.
• List all of the possible actions for this problem.
• Describe how you would represent states for this problem.
• Suppose that you need to measure exactly 1 gallon into the 3-gallon pitcher. Draw a portion of the state space that shows what series of actions and states would lead to this result.
4. Using the basic state space search algorithm, we can do breadth-first search, depth-first search, greedy search, uniform cost search, or A*. How do you modify the algorithm to get these different search strategies?
5. How do depth-limited search and iterative deepening search work?
6. We compare search methods based on completeness, optimality, time complexity, and space complexity.
• Define what it means for a search algorithm to be complete.
• Define what it means for a search algorithm to be optimal.
• When do uniform-cost search and breadth-first search perform equally well?
• When taking completeness, optimality, space complexity and time complexity into account, which uninformed search method is the best choice? Explain why.
7. Consider the graph given below. List the nodes, in order, that will be visited by the search algorithms as they try to find the goal node, labeled Z, beginning from the root node, labeled a.
```
                 /------ a ------\
                /                 \
              b1                   b2
               |                 /    \
              cx               c3      c4
               |              /  \    /  \
              dx            d5    d6 d7    d8
               |            /\    /\  /\   /\
               Z           I  J  K L M  N O  P
```
• Depth-first search (assume children are queued in right-to-left order)
• Depth-bounded search, with bound 3
• Iterative deepening search
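All of the strategies in questions 4 and 5 share one skeleton and differ only in how the frontier is managed: a FIFO queue gives breadth-first search, a LIFO stack gives depth-first search, and a priority queue ordered by path cost gives uniform-cost search (ordering by a heuristic instead gives greedy search, and by cost plus heuristic gives A*). A minimal Python sketch, assuming a hypothetical `successors(state)` helper that yields `(action, next_state, step_cost)` triples:

```python
from collections import deque
import heapq

def search(start, goal_test, successors, strategy="bfs", depth_bound=None):
    """Generic state-space search; the frontier discipline picks the strategy.

    Illustrative sketch only; successors(state) is assumed to yield
    (action, next_state, step_cost) triples.
    """
    # Each frontier entry: (path_cost, depth, state, path-of-states)
    start_node = (0, 0, start, [start])
    if strategy == "ucs":
        frontier = [start_node]          # heap ordered by path cost
    else:
        frontier = deque([start_node])   # FIFO (BFS) or LIFO (DFS)
    visited = set()
    while frontier:
        if strategy == "ucs":
            cost, depth, state, path = heapq.heappop(frontier)
        elif strategy == "dfs":
            cost, depth, state, path = frontier.pop()      # LIFO -> depth-first
        else:
            cost, depth, state, path = frontier.popleft()  # FIFO -> breadth-first
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        if depth_bound is not None and depth >= depth_bound:
            continue  # depth-limited: do not expand past the bound
        for action, nxt, step in successors(state):
            entry = (cost + step, depth + 1, nxt, path + [nxt])
            if strategy == "ucs":
                heapq.heappush(frontier, entry)
            else:
                frontier.append(entry)
    return None

def iterative_deepening(start, goal_test, successors, max_depth=50):
    # Run depth-limited DFS with bounds 0, 1, 2, ... until the goal is found.
    for bound in range(max_depth + 1):
        path = search(start, goal_test, successors, "dfs", depth_bound=bound)
        if path is not None:
            return path
    return None
```

The `depth_bound` parameter covers question 5's depth-limited search, and `iterative_deepening` simply reruns it with growing bounds.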

#### Informed Search

1. What is informed search? Define the term heuristic and describe how heuristics are used in informed search.
2. In A* search we calculate f(n), which is the estimated cost of the cheapest solution through node n, and is given by the equation:
```
f(n) = g(n) + h(n)
```
What are g(n) and h(n)? What restrictions must apply to h(n) in order to guarantee that A* is complete and optimal?
3. A* is optimally efficient for any given heuristic, meaning that no other optimal algorithm will expand fewer nodes than A*. So is A* the answer to all of our searching needs? Why or why not?
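The roles of g(n) and h(n) can be made concrete with a short sketch. This is an illustrative A* in Python, assuming hypothetical helpers: `successors(state)` yields `(next_state, step_cost)` pairs and `h` is a caller-supplied heuristic:

```python
import heapq

def astar(start, goal_test, successors, h):
    """A* search: always expand the node minimizing f(n) = g(n) + h(n),
    where g(n) is the cost paid so far and h(n) estimates the cost to go.

    Sketch only. For optimality h should never overestimate the true
    remaining cost (admissibility); this graph-search version also relies
    on h being consistent.
    """
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, state, path)
    best_g = {start: 0}                         # cheapest known cost per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, step in successors(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

Setting h(n) = 0 everywhere reduces this to uniform-cost search, which is one way to see why h must be constrained rather than arbitrary.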

#### Local Search

1. What kinds of problem domains are better suited to local search than to state space search? Give several examples.
2. What advantages does local search have over state space search?
3. What features of the search space can lead to suboptimal solutions when local search is applied?
4. Describe the hill climbing algorithm. What are two improvements that can be added to hill climbing to make it more effective?
5. Describe the simulated annealing algorithm. How is it different from hill climbing?
6. Describe the beam search algorithm. How is it different from hill climbing and simulated annealing?
7. Let's apply local search to a map coloring problem. For example, consider the map shown below. We want to find a way to color the map such that no two countries that share a boundary are the same color. For instance, A is adjacent to B, C, and E, but not D or F. Can we color this map with only 3 colors?
```
----------------------------------
|          A              |  E   |
----------------------------------
|   B  |         C        |      |
--------------------------|  F   |
|          D              |      |
----------------------------------
```
   1. What is the size of this state space?
   2. How would you represent states for this problem?
   3. What is the objective function?
   4. What is the successor function?
   5. How many successors would each state have?
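As a concrete illustration of hill climbing (with random restarts, one of the standard improvements), here is a sketch applied to the map coloring problem above. The adjacency table is read off the map and should be treated as an assumption, as should the choice of objective (number of conflicting adjacent pairs, to be minimized):

```python
import random

# Adjacency read off the map in question 7 (an assumption, not given text).
NEIGHBORS = {
    "A": ["B", "C", "E"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D", "F"],
    "D": ["B", "C", "F"],
    "E": ["A", "F"],
    "F": ["C", "D", "E"],
}
COLORS = ["red", "green", "blue"]

def conflicts(assignment):
    """Objective to minimize: number of adjacent same-colored pairs.
    Each pair appears twice in the symmetric table, hence the // 2."""
    return sum(1 for r, ns in NEIGHBORS.items() for n in ns
               if assignment[r] == assignment[n]) // 2

def hill_climb(restarts=20, steps=100):
    """Steepest-descent hill climbing with random restarts.
    A successor changes exactly one region's color. Sketch only."""
    best = None
    for _ in range(restarts):
        cur = {r: random.choice(COLORS) for r in NEIGHBORS}
        for _ in range(steps):
            if conflicts(cur) == 0:
                return cur
            # Consider every single-region recoloring; take the best one.
            nbrs = [dict(cur, **{r: c}) for r in NEIGHBORS for c in COLORS
                    if c != cur[r]]
            nxt = min(nbrs, key=conflicts)
            if conflicts(nxt) >= conflicts(cur):
                break  # local minimum or plateau: restart
            cur = nxt
        if best is None or conflicts(cur) < conflicts(best):
            best = cur
    return best
```

Note how each sub-question above maps onto the code: the state representation is the `assignment` dict, the objective function is `conflicts`, and the successor function generates the `nbrs` list (each state has 6 regions x 2 alternative colors = 12 successors).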

#### Adversarial Search

1. Consider the game of tic-tac-toe. Assume that the current state of the game is as shown below, and that it is X's turn.
```
X | O | O
---------
  | X |
---------
  |   | O
```
Draw the game tree starting from here, and show all of X's possible immediate next moves.
2. Consider the following static evaluator for tic-tac-toe. "Xn" is the number of rows, columns, or diagonals with exactly n X's and no O's. Similarly, "On" is the number of rows, columns, or diagonals with exactly n O's and no X's. Here is one possible evaluation function:
```
((3 * X2) + X1) - ((3 * O2) + O1)
```
Using this evaluation function, determine the score of all of the leaves on the tree you created above.
3. Using the minimax algorithm with a depth bound of 1, which move would be selected?
4. List all of the important features of a good static evaluation function.
5. How are static evaluators similar to heuristics? How are they different?
6. What type of traversal does minimax do on a game tree?
7. Consider the following game tree, where P1 represents player 1 and P2 represents player 2. Using alpha-beta pruning, show the values of alpha and beta at each non-leaf node. Remember that MAX layers update alpha and MIN layers update beta.
```
                   ------- P1 -------
                  /         |        \
               P2a         P2b        P2c
              /   \       / | \       /  \
           P1d    P1e  P1f P1g P1h  P1i   P1j
           /|\    /\   /\  /|\  /\   /\   /|\
          3 6 5  7 2  4 7 9 2 3 8 4  4 5 7 3 8
```
8. What move would be selected by alpha-beta pruning?
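The alpha/beta bookkeeping in question 7 can be sketched as depth-bounded minimax with pruning. In this illustrative version, `evaluate`, `moves`, and `apply_move` are hypothetical placeholders for problem-specific code (e.g. the tic-tac-toe static evaluator from question 2):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              evaluate, moves, apply_move):
    """Depth-bounded minimax with alpha-beta pruning (sketch only).

    MAX layers raise alpha, MIN layers lower beta; once alpha >= beta,
    the remaining siblings cannot affect the result and are pruned.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # static evaluation at the search horizon
    if maximizing:
        value = -math.inf
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         evaluate, moves, apply_move))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    else:
        value = math.inf
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         evaluate, moves, apply_move))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value
```

Representing the question 7 tree as nested lists and assuming the root is a MAX node, this sketch backs up a root value of 7, which agrees with working minimax by hand (a depth-first, post-order traversal).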

#### Monte Carlo Tree Search

1. When would this algorithm be preferred over minimax?
2. State the four main steps of MCTS, and describe what they do.
3. Consider the top level of an MCTS tree shown below. Assume that MCTS has completed 10 rollouts and is ready to make a move. If the current node represents player -1, which move would be selected? If the current node represents player +1, which move would be selected?
```
                  current
                 visits:10
                 wins:5
                 losses:5
                 value:1.0
               /     |     \
      A              B              C
  visits:6       visits:3       visits:1
  wins:4         wins:1         wins:0
  losses:2       losses:1       losses:1
  value:1.33     value:0.67     value:0.0
```
4. For each variable in the UCB formula (shown below), state its meaning. For each child of the current node, fill in the formula with the appropriate values (assume that the current node represents player +1 and that C=1).
```
UCB = v + C * sqrt(ln(N) / n)
```
5. Assume that the current node is fully expanded and it represents player +1. How would the UCB formula be used to select one of the current node's children?
6. The goal of using UCB is to both exploit good moves and to explore other moves that may prove to be even better. Explain how this formula accomplishes both of these goals, and discuss how the constant C could play a role.
7. The values associated with nodes in MCTS must account for three outcomes: win, lose, or draw. Explain how MCTS calculates value.
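Questions 4-6 can be grounded with a small numeric check. A sketch of the UCB computation, plugging in the child statistics from the tree in question 3 (taking the printed `value` fields as the children's values from player +1's perspective, which is an assumption):

```python
import math

def ucb(child_value, child_visits, parent_visits, C=1.0):
    """UCB score: v exploits the child's average value, while the
    square-root term rewards rarely visited children relative to the
    parent's visit count N; C trades off the two. Sketch only."""
    return child_value + C * math.sqrt(math.log(parent_visits) / child_visits)

# (value, visits) for each child of the current node, read off the tree.
children = {"A": (1.33, 6), "B": (0.67, 3), "C": (0.0, 1)}
scores = {name: ucb(v, n, parent_visits=10)
          for name, (v, n) in children.items()}
best = max(scores, key=scores.get)
```

With these numbers, A scores about 1.95, B about 1.55, and C about 1.52, so UCB would select A: its high value dominates even though C's exploration bonus is the largest. Raising C shifts the balance toward exploration. Note that UCB guides selection *during* rollouts; when MCTS finally commits to a move (as in question 3), implementations commonly pick the most-visited or highest-valued child instead.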