CS63 Artificial Intelligence


Exam 1 Review

Intelligent Agents

  1. Define the term agent.
  2. Give an example of an agent and describe how it meets the criteria of the definition.
  3. List the properties environments can have.
  4. Which properties are associated with the most challenging environments? Give an example of such an environment.
  5. Which properties are associated with the simplest environments? Give an example of such an environment.

Solving Problems by Searching

  1. Define in your own words the following terms: state, state space, search tree, search node, goal state, and action.
  2. What kinds of problems can state space search be applied to? How do you formulate a problem so that this technique can be used?
  3. This is a slight variation on the water pitcher problem that we talked about in class. You have three pitchers, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet. You can fill the pitchers up to the top, pour them from one to another, or empty them onto the ground.
    • List all of the possible actions for this problem.
    • Describe how you would represent states for this problem.
    • Suppose that you need to measure exactly 1 gallon into the 3-gallon pitcher. Draw a portion of the state space that shows what series of actions and states would lead to this result.
  4. Using the basic state space search algorithm, we can do breadth-first search, depth-first search, greedy search, uniform cost search, or A*. How do you modify the algorithm to get these different search strategies?
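One way to see item 4 concretely: the same search loop yields different strategies depending only on how the frontier is managed. The sketch below is illustrative (the `successors` function is a made-up hook, and repeated-state checking is omitted for brevity).

```python
import heapq
from collections import deque

def tree_search(start, goal, successors, strategy="bfs"):
    """One search loop, three strategies: the frontier is a FIFO queue for
    breadth-first, a LIFO stack for depth-first, and a priority queue
    ordered by path cost g(n) for uniform-cost search."""
    if strategy == "ucs":
        frontier = [(0, start, [start])]              # (cost, state, path)
        while frontier:
            cost, state, path = heapq.heappop(frontier)
            if state == goal:
                return path
            for child, step_cost in successors(state):
                heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    else:
        frontier = deque([(start, [start])])
        while frontier:
            # popleft = FIFO = breadth-first; pop = LIFO = depth-first
            state, path = frontier.popleft() if strategy == "bfs" else frontier.pop()
            if state == goal:
                return path
            for child, _ in successors(state):
                frontier.append((child, path + [child]))
    return None
```

Greedy search and A* follow the same pattern, with the priority key changed to h(n) and g(n) + h(n), respectively.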
  5. How do depth-limited search and iterative deepening search work?
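The two algorithms in item 5 can be sketched as follows. This is a tree-style sketch (no cycle checking); `successors` is a hypothetical function returning a state's children.

```python
def depth_limited(state, goal, successors, limit):
    """Depth-first search that gives up once the depth limit is reached."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for child in successors(state):
        result = depth_limited(child, goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None
```

Iterative deepening re-explores shallow levels on every pass, but since the deepest level dominates the node count, the overhead is modest, and it keeps depth-first's memory footprint with breadth-first's completeness.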
  6. We compare search methods based on completeness, optimality, time complexity, and space complexity.
    • Define what it means for a search algorithm to be complete.
    • Define what it means for a search algorithm to be optimal.
    • What are the advantages and disadvantages of breadth-first search?
    • What are the advantages and disadvantages of depth-first search?
    • When do uniform-cost search and breadth-first search perform equally well?
    • When taking completeness, optimality, space complexity and time complexity into account, which uninformed search method is the best choice? Explain why.
  7. Consider the graph given below. List the nodes, in order, that will be visited by the search algorithms as they try to find the goal node, labeled Z, beginning from the root node, labeled a.
                /-----a------\
               /              \
              /                \
            b1                  b2
            |                 /    \
            cx             c3        c4
            |             /   \     /   \
            dx           d5   d6   d7   d8 
            |           / \  / \  / \  / \ 
            Z           I J  K L  M N  O P
          
    • Depth-first search (assume children are added in right to left order).
    • Depth-limited search, with depth limit 3
    • Iterative deepening search
    • Breadth-first search (assume children are added in left to right order)

Informed Search

  1. What is informed search? Define the term heuristic and describe how heuristics are used in informed search.
  2. In A* search we calculate f(n), which is the estimated cost of the cheapest solution through node n, and is given by the equation:
      f(n) = g(n) + h(n)

    What are g(n) and h(n)? What restrictions must apply to h(n) in order to guarantee that A* is complete and optimal?
  3. A* is optimally efficient for any given heuristic, meaning that no other optimal algorithm will expand fewer nodes than A*. So is A* the answer to all of our searching needs? Why or why not?
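As a concrete sketch of f(n) = g(n) + h(n), the code below runs A* on a small hypothetical grid. The grid, `neighbors`, and `heuristic` names are mine; Manhattan distance is assumed admissible here because every step on a 4-connected grid costs 1.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A*: always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(heuristic(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                                  # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for child, step_cost in neighbors(state):
            new_g = g + step_cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(child), new_g, child, path + [child]))
    return None, float("inf")
```

With h(n) = 0 for every state, this degenerates to uniform-cost search, which is one way to remember the relationship between the two.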

Local Search

  1. What kinds of problem domains are better suited to local search than to state space search? Give several examples.
  2. What advantages do local searches have over state space search?
  3. What features of the search space can lead to less than optimal solutions when local search is applied?
  4. Describe the hill climbing algorithm. What are two improvements that can be added to hill climbing to make it more effective?
  5. Describe the simulated annealing algorithm. How is it different from hill climbing?
  6. Describe the beam search algorithm. How is it different from hill climbing and simulated annealing?
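One standard improvement from item 4, random restarts, can be sketched as below. The landscape and the `value`, `neighbors`, and `random_state` hooks are hypothetical; the point is that restarting from random states lets greedy ascent escape local maxima.

```python
import random

def hill_climb(start, value, neighbors):
    """Greedy ascent: move to the best neighbor until no neighbor improves."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current          # local maximum (or plateau edge)
        current = best

def random_restart_hill_climb(value, neighbors, random_state, restarts=10, seed=0):
    """Run hill climbing from several random starting states; keep the best result."""
    rng = random.Random(seed)
    best = hill_climb(random_state(rng), value, neighbors)
    for _ in range(restarts - 1):
        candidate = hill_climb(random_state(rng), value, neighbors)
        if value(candidate) > value(best):
            best = candidate
    return best
```

Simulated annealing takes the other route: instead of restarting, it occasionally accepts a downhill move, with a probability that shrinks as the temperature cools.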

Adversarial Search

  1. Consider the game tic-tac-toe. Assume that the current state of the game is as shown below, and it is X's turn.
    	X | O | O
    	---------
    	  | X |
    	---------
    	  |   | O
          
    Draw the game tree starting from here, and show all of X's possible immediate next moves.
  2. Consider the following static evaluator for tic-tac-toe. "Xn" is the number of rows, columns, or diagonals with exactly n X's and no O's. Similarly, "On" is the number of rows, columns, or diagonals with exactly n O's and no X's. Here is one possible evaluation function:
    ((3 * X2) + X1) - ((3 * O2) + O1)
    
    Using this evaluation function, determine the score of all of the leaves on the tree you created above.
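The evaluator in item 2 can be written out directly. The board encoding below (a flat list of 9 cells holding 'X', 'O', or ' ') is an assumption, not part of the question.

```python
def lines(board):
    """All 8 rows, columns, and diagonals of a 3x3 board (flat list of 9 cells)."""
    idx = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
           (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
           (0, 4, 8), (2, 4, 6)]              # diagonals
    return [[board[i] for i in trip] for trip in idx]

def evaluate(board):
    """((3 * X2) + X1) - ((3 * O2) + O1), where Xn counts lines with
    exactly n X's and no O's, and On counts lines with exactly n O's and no X's."""
    x2 = x1 = o2 = o1 = 0
    for line in lines(board):
        xs, os = line.count('X'), line.count('O')
        if os == 0 and xs == 2:
            x2 += 1
        if os == 0 and xs == 1:
            x1 += 1
        if xs == 0 and os == 2:
            o2 += 1
        if xs == 0 and os == 1:
            o1 += 1
    return (3 * x2 + x1) - (3 * o2 + o1)
```

Note that lines containing both X's and O's contribute nothing to either side, since neither player can complete them unopposed.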
  3. Using the minimax algorithm with a depth bound of 1, which move would be selected?
  4. List all of the important features of a good static evaluation function.
  5. How are static evaluators similar to heuristics? How are they different?
  6. What type of traversal does minimax do on a game tree?
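The depth-bounded minimax from item 3 can be sketched generically. The `moves`, `apply_move`, and `evaluate` parameters are hypothetical hooks into a game; for tic-tac-toe they would enumerate empty squares, place a mark, and apply the static evaluator from item 2.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Depth-bounded minimax: a depth-first, post-order walk of the game tree.
    Leaves (depth 0, or no legal moves) are scored by the static evaluator."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, evaluate)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               moves, apply_move, evaluate)
            if score < best:
                best, best_move = score, m
    return best, best_move
```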
  7. Consider the following game tree, where P1 represents player 1 (the MAX player) and P2 represents player 2 (the MIN player). Using alpha-beta pruning, show the values of alpha and beta at each non-leaf node. Remember that MAX layers update alpha and MIN layers update beta.
       
                                 ------ P1 -----
                                /       |       \
                             --P2a      P2b     P2c--
                            /  |      /  |  \     |  \
                          P1d P1e   P1f P1g P1h  P1i  P1j
                         /|\  |\    /|  /|\  |\   /|  /|\
                        3 6 5 7 2  4 7 9 2 3 8 4 4 5 7 3 8 
    
  8. What move would be selected by alpha-beta pruning?
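The rule from item 7, MAX raises alpha, MIN lowers beta, prune when alpha >= beta, can be sketched as below. As with the minimax sketch, the `moves`, `apply_move`, and `evaluate` hooks are hypothetical.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Minimax with alpha-beta pruning. alpha = best value MAX can guarantee so
    far; beta = best value MIN can guarantee; alpha >= beta means the remaining
    siblings cannot affect the final decision and are pruned."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break   # the MIN player above will never allow this branch
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break   # the MAX player above will never allow this branch
        return best
```

Pruning never changes the value returned at the root; it only skips subtrees that provably cannot matter, which is why move ordering (trying strong moves first) makes it so much more effective.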

Monte Carlo Tree Search

  1. When would this algorithm be preferred over minimax?
  2. State the four main steps of MCTS and describe what each one does.
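The four phases can be sketched in one loop. This is a simplified one-player (pure maximization) sketch, not a full two-player implementation, which would also alternate the reward's sign between layers; the `moves`, `apply_move`, and `rollout_value` hooks are hypothetical, and `rollout_value` stands in for a full random playout.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def mcts(root_state, moves, apply_move, rollout_value, iterations=200, C=1.4, seed=0):
    """Select (UCB) -> expand -> simulate -> backpropagate, repeated."""
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Select: descend by UCB until a node with untried moves (or a leaf)
        node = root
        while node.children and len(node.children) == len(moves(node.state)):
            node = max(node.children,
                       key=lambda c: c.value / c.visits
                       + C * math.sqrt(math.log(node.visits) / c.visits))
        # 2. Expand: add one untried child, if the state is non-terminal
        untried = moves(node.state)[len(node.children):]
        if untried:
            child = Node(apply_move(node.state, untried[0]), parent=node)
            node.children.append(child)
            node = child
        # 3. Simulate: estimate the node's value by a (random) playout
        reward = rollout_value(node.state, rng)
        # 4. Backpropagate: update visit counts and values up to the root
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Final decision: the most-visited child of the root
    return max(root.children, key=lambda c: c.visits).state
```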
  3. Consider the top level of an MCTS tree, shown below. Assume that the root node is fully expanded. How would the UCB formula be used to select one of the root's children?
          UCB = v + C*sqrt(ln(N)/n)
    
                                     root
                                   visits:9
                            -------value:0.6 --------
                           /       /         \       \
                          A       B          C        D
                    visits:2   visits:1   visits:2   visits:4
                    value:0.7  value:0.3  value:0.5  value:0.8
    
        
  4. For each variable in the formula, state its meaning. For each child of the root, fill in the formula with the appropriate values (assume that C=1).
  5. The goal of using UCB is to both exploit good moves and to explore other moves that may prove to be even better. Explain how this formula accomplishes both of these goals, and discuss how the constant C could play a role.
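Items 3 and 4 can be checked numerically. The sketch below plugs the values and visit counts from the tree above into the formula with C = 1; the variable names are mine.

```python
import math

def ucb(v, n, N, C=1.0):
    """UCB = v + C * sqrt(ln(N) / n): average value (exploitation) plus a
    bonus that is large for rarely visited children (exploration).
    v = child's average value, n = child's visits, N = parent's visits."""
    return v + C * math.sqrt(math.log(N) / n)

# (value, visits) for each child of the root, read from the tree above
children = {"A": (0.7, 2), "B": (0.3, 1), "C": (0.5, 2), "D": (0.8, 4)}
N = 9  # root visit count
scores = {name: ucb(v, n, N) for name, (v, n) in children.items()}
best = max(scores, key=scores.get)
```

With these numbers, the rarely visited child B ends up with the largest UCB score despite having the lowest average value: its exploration bonus outweighs its exploitation term, which is exactly the trade-off item 5 asks about. A larger C amplifies the bonus and pushes selection further toward exploration.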
  6. The values associated with nodes in MCTS must account for three outcomes: win, lose, or draw. Explain how your MCTS implementation represents these values.