Connect Four is a connection game that begins with an empty rectangular board. Players take turns dropping pieces from the top of the board; each piece lands in the lowest empty spot in its column. A player who connects four of their pieces in a line (horizontal, vertical, or diagonal) wins. The examples below show the starting board, a board where white has won, and one where black has won (winning connections highlighted in blue).
-------------
|· · · · · · ·|
|· · · · · · ·|
|· · · · · · ·|
|· · · · · · ·|
|· · · · · · ·|
|· · · · · · ·|
-------------

-------------
|· · · ● · · ·|
|· · · ○ · · ·|
|· · ● ● ○ · ·|
|○ ● ○ ○ ● · ·|
|● ○ ○ ○ ● · ·|
|○ ○ ● ○ ● · ●|
-------------

-------------
|● · ○ · ○ · ·|
|● · ○ · ○ · ●|
|● · ● ● ○ ● ●|
|○ ● ○ ● ● ○ ○|
|● ○ ● ○ ● ○ ●|
|● ○ ○ ● ○ ○ ○|
-------------

In this lab you will create an AI Connect Four player.
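To make the win condition concrete, here is a minimal sketch of a four-in-a-row check. This is not the lab's code: the board representation (a list of rows, with 0 for an empty cell and a player id otherwise) is an assumption for illustration only.

```python
# Hypothetical sketch, NOT the lab's implementation. Assumes board is a
# list of rows where 0 means empty and any other value is a player id.

def has_four(board, player):
    """Return True if player has four pieces in a row anywhere on board."""
    rows, cols = len(board), len(board[0])
    # Directions to scan: right, down, down-right, down-left.
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        for r in range(rows):
            for c in range(cols):
                # Check the four cells starting at (r, c) in this direction,
                # skipping runs that would leave the board.
                if all(0 <= r + i * dr < rows and 0 <= c + i * dc < cols
                       and board[r + i * dr][c + i * dc] == player
                       for i in range(4)):
                    return True
    return False
```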
cd ~/cs63/labs
git clone git@github.swarthmore.edu:...
The objectives of this lab are to:
You will only need to modify one Python file, MinMaxPlayers.py, but you will also use the following files:
Try running ./ConnectFour.py -h to see the available command line options. Then try playing against a really terrible opponent by running:
./ConnectFour.py random human --show
Begin by completing the following steps:
Next focus on the MinMaxPlayers.py file, and complete the following steps:
bounded_min_max(node)
    if node's depth is at limit
        return staticEval(node's state)
    if node's state is terminal
        return staticEval(node's state)
    initialize best value
    get all possible moves from the node's state
    for each move
        determine the next state
        create a node for the next state
        # Recursive call
        value = bounded_min_max(next node)
        if value better than best value  # depends on player
            update best value
    return best value
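The pseudocode above could be sketched in Python roughly as follows. This is only a sketch: the `Node` fields and the `moves`, `apply_move`, `static_eval`, and `is_terminal` callables are hypothetical stand-ins, not the lab's actual interface, which you should use instead.

```python
# Hypothetical sketch of depth-bounded minimax; the game interface
# (moves, apply_move, static_eval, is_terminal) is assumed, not the lab's.

DEPTH_LIMIT = 4

class Node:
    def __init__(self, state, depth, is_max):
        self.state = state    # the game state this node represents
        self.depth = depth    # how many moves deep this node is
        self.is_max = is_max  # True if the maximizing player moves here

def bounded_min_max(node, moves, apply_move, static_eval, is_terminal):
    # Cut off the search at the depth limit or at a terminal state.
    if node.depth == DEPTH_LIMIT or is_terminal(node.state):
        return static_eval(node.state)
    # Max starts as low as possible; min starts as high as possible.
    best = float('-inf') if node.is_max else float('inf')
    for move in moves(node.state):
        child = Node(apply_move(node.state, move), node.depth + 1,
                     not node.is_max)
        value = bounded_min_max(child, moves, apply_move,
                                static_eval, is_terminal)
        # "Better" depends on which player is choosing at this node.
        best = max(best, value) if node.is_max else min(best, value)
    return best
```

A usage example on a tiny two-level game tree (states are tuples of moves made so far) shows the max player picking the branch whose minimum leaf is largest.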
Come up with an improved static evaluator for Connect Four and implement it in the betterEval function.
Your goal is to create an evaluator that will beat the basicEval function when tested in a series of games using minimax players with equal depth limits.
Your static evaluator should:
Write a clear and thorough comment with your betterEval method to describe how it works. If you add helper functions, be sure to include comments describing these as well.
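As one illustration of what a static evaluator can look like (this is deliberately simple and is not a suggested betterEval; the board representation, a list of rows with 1 for the max player, -1 for the min player, and 0 for empty, is an assumption, not the lab's actual API):

```python
# Hypothetical evaluator sketch, NOT the lab's betterEval. Assumes board
# is a list of rows containing 1 (max player), -1 (min player), or 0.

def center_weighted_eval(board):
    """Score a board by weighting each piece by its column's closeness
    to the center: center columns participate in more possible
    four-in-a-row lines, so pieces there are worth more."""
    num_cols = len(board[0])
    center = (num_cols - 1) / 2.0
    score = 0.0
    for row in board:
        for col, piece in enumerate(row):
            # Weight ranges from 1 (edge column) up to center+1 (middle).
            weight = center + 1 - abs(col - center)
            score += piece * weight
    return score
```

A stronger evaluator would also count partial lines (two or three in a row with open continuations), but the weighting idea above shows the general shape: a cheap function mapping any board to a number that is higher when the max player is better off.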
We will cover this pruning technique in class later this week. For now you can look ahead to Wednesday's reading, read the description of alpha-beta pruning in the Poole & Mackworth textbook, or check out the alpha-beta pruning Wikipedia article.
Note that alpha-beta pruning should always return the same scores as Minimax would, but it can potentially do so much more efficiently by cutting off search down branches that will not change the outcome of the search.
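The pruning idea can be sketched as follows. Again this is only a sketch: the `moves`, `apply_move`, `static_eval`, and `is_terminal` callables are hypothetical stand-ins for the lab's actual game interface. Alpha tracks the best value the max player can already guarantee, beta the best the min player can; when they cross, the current branch cannot affect the final answer.

```python
# Hypothetical sketch of depth-bounded minimax with alpha-beta pruning;
# the game interface (moves, apply_move, static_eval, is_terminal) is
# assumed, not the lab's.

DEPTH_LIMIT = 4

def alpha_beta(state, depth, is_max, alpha, beta,
               moves, apply_move, static_eval, is_terminal):
    if depth == DEPTH_LIMIT or is_terminal(state):
        return static_eval(state)
    if is_max:
        value = float('-inf')
        for m in moves(state):
            value = max(value, alpha_beta(apply_move(state, m), depth + 1,
                                          False, alpha, beta, moves,
                                          apply_move, static_eval,
                                          is_terminal))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the min player will never allow this branch
    else:
        value = float('inf')
        for m in moves(state):
            value = min(value, alpha_beta(apply_move(state, m), depth + 1,
                                          True, alpha, beta, moves,
                                          apply_move, static_eval,
                                          is_terminal))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the max player will never allow this branch
    return value
```

Called with `alpha = -inf` and `beta = +inf` at the root, this returns the same value plain minimax would, usually after visiting far fewer nodes.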
mkdir username1-username2_agent/
touch username1-username2_agent/__init__.py
cp *.pyc username1-username2_agent/
tar -zcf username1-username2_agent.tar.gz username1-username2_agent/

The result is a compressed directory called username1-username2_agent.tar.gz that contains your .pyc files. If you don't have .pyc files for some of your code, just run python and import the file you want compiled. You should edit the commands above to actually reflect your usernames. If you have your friend's tarball (suppose it's called bwieden1-agent.tar.gz), you can run
tar -xzf bwieden1-agent.tar.gz

to get a directory with the same name: bwieden1-agent. Once you have this directory, you can add a line to your python code to import your friend's agent:
from bwieden1_agent.MinMaxPlayers import TournamentPlayer as FriendAgent

and then test against their implementation. (Note that Python module names cannot contain hyphens, so rename the extracted directory from bwieden1-agent to bwieden1_agent before importing.)
None of the points for the assignment depend on outcomes in the tournament; it's strictly for fun. If you would like to opt out of the tournament, you may do so by sending me an email before the submission deadline.
git add MinMaxPlayers.py
git commit -m "final version"
git push