- Throughout the history of AI, there have been two contrasting approaches: **symbolic** and **subsymbolic**. The symbolic approach dominated early on, and the subsymbolic approach is currently dominant. Compare and contrast these two approaches to AI.
- Rather than focusing solely on one of these approaches, why do some researchers advocate for an anarchy of methods?

- Define the term *agent*.
- Give an example of an agent and describe how it meets the criteria of the definition.
- Define *artificial intelligence* in terms of agents.
- AI agents can operate in environments that are fully or partially observable, single-agent or multi-agent, deterministic or stochastic, static or dynamic, and discrete or continuous.
- Which properties are associated with the most challenging environments? Give an example of such an environment.
- Which properties are associated with the simplest environments? Give an example of such an environment.
- Solitaire is a card game which one can play by oneself. There are many variations, but all begin with a shuffled deck of cards that are then dealt out in various ways. Characterize solitaire based on the list of environmental properties given above.

- Define in your own words the following terms: *state*, *state space*, *search tree*, *search node*, *goal state*, and *action*.
- What kinds of problems can state space search be applied to? How do you formulate a problem so that this technique can be used?
- This is a slight variation on the water pitcher problem that
we talked about in class. You have three pitchers, measuring 12
gallons, 8 gallons, and 3 gallons, and a water faucet. You can
fill the pitchers up to the top, pour them from one to another,
or empty them onto the ground.
- List all of the possible actions for this problem.
- Describe how you would represent states for this problem.
- Suppose that you need to measure exactly 1 gallon into the 3-gallon pitcher. Draw a portion of the state space that shows what series of actions and states would lead to this result.
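The pitcher puzzle above can also be explored programmatically. Below is a minimal breadth-first sketch (the state encoding and function names are my own, not from the text), where a state is a tuple of the gallons currently in the 12-, 8-, and 3-gallon pitchers:

```python
from collections import deque

CAPACITIES = (12, 8, 3)  # pitcher sizes in gallons

def successors(state):
    """Generate all states reachable by one fill, empty, or pour action."""
    for i in range(3):
        # Fill pitcher i from the faucet.
        yield state[:i] + (CAPACITIES[i],) + state[i+1:]
        # Empty pitcher i onto the ground.
        yield state[:i] + (0,) + state[i+1:]
        # Pour pitcher i into pitcher j until i is empty or j is full.
        for j in range(3):
            if i != j:
                amount = min(state[i], CAPACITIES[j] - state[j])
                s = list(state)
                s[i] -= amount
                s[j] += amount
                yield tuple(s)

def bfs(start, is_goal):
    """Breadth-first search returning a shortest sequence of states."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Exactly 1 gallon in the 3-gallon pitcher, starting with all pitchers empty.
path = bfs((0, 0, 0), lambda s: s[2] == 1)
```

Running this returns one shortest sequence of states ending with 1 gallon in the 3-gallon pitcher; drawing that sequence answers the last sub-question.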

- Using the basic state space search algorithm, we can do breadth-first search, depth-first search, greedy search, uniform cost search, or A*. How do you modify the algorithm to get these different search strategies?
- How do depth-limited search and iterative deepening search work?
- We compare search methods based on completeness, optimality,
time complexity, and space complexity.
- Define what it means for a search algorithm to be *complete*.
- Define what it means for a search algorithm to be *optimal*.
- What are the advantages and disadvantages of breadth-first search?
- What are the advantages and disadvantages of depth-first search?
- When do uniform-cost search and breadth-first search perform equally well?
- When taking completeness, optimality, space complexity and time complexity into account, which uninformed search method is the best choice? Explain why.
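One way to see the relationship among these strategies: the basic algorithm stays the same and only the frontier's discipline changes. A rough sketch (function and parameter names are my own, not from the text):

```python
import heapq
from collections import deque

def generic_search(start, is_goal, successors, strategy="bfs", f=None):
    """One search loop; the frontier's order determines the strategy.
    strategy: 'bfs'  -> FIFO queue
              'dfs'  -> LIFO stack
              'best' -> priority queue ordered by f(state)
    """
    if strategy == "best":
        frontier = [(f(start), start)]
        pop = lambda: heapq.heappop(frontier)[1]
        push = lambda s: heapq.heappush(frontier, (f(s), s))
    else:
        frontier = deque([start])
        pop = frontier.popleft if strategy == "bfs" else frontier.pop
        push = frontier.append
    visited = set()
    while frontier:
        state = pop()
        if is_goal(state):
            return state
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            push(nxt)
    return None
```

With `f(s) = g(s)` the `'best'` variant is uniform-cost search; with `f(s) = g(s) + h(s)` it is A*; greedy search uses `f(s) = h(s)` alone.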

- Consider the graph given below. List the nodes, in order, that will be visited by the search algorithms as they try to find the goal node, labeled **Z**, beginning from the root node, labeled **a**.

```
          /-----a------\
         /              \
       b1                b2
        |               /  \
       cx             c3    c4
        |            /  \   /  \
       dx          d5   d6 d7   d8
        |          /\   /\  /\   /\
        Z         I J  K L M N  O P
```

- Depth-first search (assume children are queued in right-to-left order)
- Breadth-first search
- Depth-bounded search, with bound 3
- Iterative deepening search


- What is informed search? Define the term *heuristic* and describe how heuristics are used in informed search.
- In A* search we calculate `f(n)`, which is the estimated cost of the cheapest solution through node `n`, and is given by the equation:

  `f(n) = g(n) + h(n)`

  What are `g(n)` and `h(n)`? What restrictions must apply to `h(n)` in order to guarantee that A* is complete and optimal?
- A* is optimally efficient for any given heuristic, meaning that no other optimal algorithm will expand fewer nodes than A*. So is A* the answer to all of our searching needs? Why or why not?
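As a concrete illustration of `f(n) = g(n) + h(n)`, here is a hedged A* sketch on a 4-connected grid with unit step costs, using Manhattan distance as the heuristic (all names are my own, not from the text):

```python
import heapq

def astar(start, goal, passable):
    """A* on a 4-connected grid; f(n) = g(n) + h(n), h = Manhattan distance."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # entries are (f, g, position)
    best_g = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g                    # cost of the cheapest path found
        if g > best_g.get(pos, float("inf")):
            continue                    # stale frontier entry
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None
```

Manhattan distance never overestimates the true cost on such a grid, so this `h(n)` is admissible and the search is optimal.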

- What kinds of problem domains are better suited for a local search rather than a state space search? Give several examples.
- What advantages do local searches have over state space search?
- What features of the search space can lead to less than optimal solutions when local search is applied?
- Describe the hill climbing algorithm. What are two improvements that can be added to hill climbing to make it more effective?
- Describe the simulated annealing algorithm. How is it different from hill climbing?
- Describe the beam search algorithm. How is it different from hill climbing and simulated annealing?
- Let's apply local search to a map coloring problem. For
example, consider the map shown below. We want to find a way to
color the map such that no two countries that share a boundary are
the same color. For instance A is adjacent to B, C, and E, but not D
or F. Can we color this map with only 3 colors?
```
----------------------------------
|       A        |       E       |
----------------------------------
|   B   |   C    |               |
|--------------------------|  F  |
|   D                      |     |
----------------------------------
```

- What is the size of this state space?
- How would you represent states for this problem?
- What is the objective function?
- What is the successor function?
- How many successors would each state have?
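As an illustration (not an answer key), hill climbing on this map might be sketched as follows. The adjacency list is transcribed from the figure; all other names are my own:

```python
import random

EDGES = [("A", "B"), ("A", "C"), ("A", "E"), ("B", "C"), ("B", "D"),
         ("C", "D"), ("C", "F"), ("D", "F"), ("E", "F")]
COUNTRIES = "ABCDEF"
COLORS = ["red", "green", "blue"]

def conflicts(coloring):
    """Objective function: number of shared borders with matching colors."""
    return sum(coloring[u] == coloring[v] for u, v in EDGES)

def hill_climb(max_steps=10000, seed=0):
    rng = random.Random(seed)
    coloring = {c: rng.choice(COLORS) for c in COUNTRIES}
    for _ in range(max_steps):
        if conflicts(coloring) == 0:
            break
        # Successor function: recolor one country; take the best move.
        best = min(
            (dict(coloring, **{c: col})
             for c in COUNTRIES for col in COLORS if col != coloring[c]),
            key=conflicts,
        )
        if conflicts(best) >= conflicts(coloring):
            # Stuck on a plateau or local optimum: random restart.
            coloring = {c: rng.choice(COLORS) for c in COUNTRIES}
        else:
            coloring = best
    return coloring
```

Here the objective counts conflicting borders and a successor recolors a single country; the random restart is one of the standard improvements to plain hill climbing.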

- Consider the game tic-tac-toe. Assume that the current state of the game is as shown below, and it is X's turn.
```
 X | O | O
 ---------
   | X |
 ---------
   |   | O
```

Draw the game tree starting from here, and show all of X's possible immediate next moves.
- Consider the following static evaluator for tic-tac-toe. "Xn" is the number of rows, columns, or diagonals with exactly n X's and no O's. Similarly, "On" is the number of rows, columns, or diagonals with exactly n O's and no X's. Here is one possible evaluation function:

  `((3 * X2) + X1) - ((3 * O2) + O1)`

  Using this evaluation function, determine the score of all of the leaves on the tree you created above.
- Using the minimax algorithm with a depth bound of 1, which move would be selected?
- List all of the important features of a good static evaluation function.
- How are static evaluators similar to heuristics? How are they different?
- What type of traversal does minimax do on a game tree?
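The static evaluator above is small enough to sketch in code. The board encoding and helper names here are my own; the formula is the one given:

```python
def lines(board):
    """All rows, columns, and diagonals of a 3x3 board (flat list of 9 cells)."""
    idx = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
           (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
           (0, 4, 8), (2, 4, 6)]              # diagonals
    return [[board[i] for i in line] for line in idx]

def evaluate(board):
    """Score = ((3 * X2) + X1) - ((3 * O2) + O1)."""
    def count(player, opponent, n):
        # Lines with exactly n of player's marks and none of the opponent's.
        return sum(1 for line in lines(board)
                   if line.count(player) == n and line.count(opponent) == 0)
    return (3 * count("X", "O", 2) + count("X", "O", 1)) \
         - (3 * count("O", "X", 2) + count("O", "X", 1))

# The position from the question above.
board = ["X", "O", "O",
         "",  "X", "",
         "",  "",  "O"]
```

For this position X2 = 0, X1 = 2 (middle row and left column), O2 = 1 (right column), and O1 = 1 (bottom row), so the evaluator returns (0 + 2) − (3 + 1) = −2.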
- Consider the following game tree, where P1 represents player 1 and P2 represents player 2. Using alpha-beta pruning, show the values of alpha and beta at each non-leaf node. Remember that MAX layers update alpha and MIN layers update beta.

```
                ------ P1 -----
               /       |       \
         --P2a        P2b       P2c--
         /   |      /  |  \       |  \
      P1d   P1e   P1f P1g  P1h  P1i  P1j
      /|\   |\    /|  /|\   |\   /|   /|\
     3 6 5  7 2  4 7 9 2 3  8 4 4 5  7 3 8
```

- What move would be selected by alpha-beta pruning?
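A compact alpha-beta sketch over nested lists, where a leaf is a number and an interior node is a list of children (the encoding and names are my own, not from the text):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of node, pruning branches that cannot
    affect the result. MAX layers raise alpha; MIN layers lower beta."""
    if isinstance(node, (int, float)):       # leaf node
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                        # beta cutoff
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break                        # alpha cutoff
        return value

# The tree from the question: P1 (MAX) at the root, then P2 (MIN), then P1 (MAX).
tree = [[[3, 6, 5], [7, 2]],
        [[4, 7], [9, 2, 3], [8, 4]],
        [[4, 5], [7, 3, 8]]]
```

`alphabeta(tree, True)` returns the backed-up value at the root; tracing the alpha and beta updates by hand on the tree above should match what the code computes.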

- When would Monte Carlo tree search (MCTS) be preferred over minimax?
- State the four main steps of MCTS, and describe what they do.
- Consider the top-level of an MCTS tree shown below. Assume that
MCTS has completed 10 rollouts and is ready to make a move. If the
current node represents player -1, which move would be selected? If
the current node represents player +1, which move would be selected?
```
              current
    visits:10  wins:5  losses:5
       ______value:1.0______
      /          |          \
     A           B           C
 visits:6    visits:3    visits:1
 wins:4      wins:1      wins:0
 losses:2    losses:1    losses:1
 value:1.33  value:0.67  value:0.0
```

- For each variable in the UCB formula (shown below), state its
meaning. For each child of the current node, fill in the formula
with the appropriate values (assume that the current node
represents player +1 and that C=1).
  `UCB = v + C*sqrt(ln(N)/n)`

- Assume that the current node is fully expanded and it represents player +1. How would the UCB formula be used to select one of the root's children?
- The goal of using UCB is to both *exploit* good moves and to *explore* other moves that may prove to be even better. Explain how this formula accomplishes both of these goals, and discuss how the constant `C` could play a role.
- The values associated with nodes in MCTS must account for three outcomes: win, lose, or draw. Explain how MCTS calculates value.