Pac-Man seeks reward.
Should he eat or should he run?
When in doubt, q-learn.
In this project, you will implement value iteration and, as an optional part of the project, q-learning. You will test your agents first on Gridworld (from class), then apply them to a simulated robot controller (Crawler) and Pac-Man.
The code for this project contains the following files, which are available in a zip archive:
|valueIterationAgents.py|A value iteration agent for solving known MDPs.|
|qlearningAgents.py|Q-learning agents for Gridworld, Crawler and Pac-Man.|
|analysis.py|A file to put your answers to questions given in the project.|
|mdp.py|Defines methods on general MDPs.|
|learningAgents.py|Defines the base classes ValueEstimationAgent and QLearningAgent, which your agents will extend.|
|gridworld.py|The Gridworld implementation.|
|featureExtractors.py|Classes for extracting features on (state,action) pairs. Used for the approximate q-learning agent (in qlearningAgents.py).|
|environment.py|Abstract class for general reinforcement learning environments. Used by gridworld.py.|
|graphicsGridworldDisplay.py|Gridworld graphical display.|
|textGridworldDisplay.py|Plug-in for the Gridworld text interface.|
|crawler.py|The crawler code and test harness. You will run this but not edit it.|
|graphicsCrawlerDisplay.py|GUI for the crawler robot.|
What to submit: You will fill in portions of valueIterationAgents.py, qlearningAgents.py, and analysis.py during the assignment. You should submit only these files. Please don't change any others.
To get started, run Gridworld in manual control mode, which uses the arrow keys:
python gridworld.py -m
You will see the two-exit layout from class. The blue dot is the agent. Note that when you press up, the agent only actually moves north 80% of the time. Such is the life of a Gridworld agent!
You can control many aspects of the simulation. A full list of options is available by running:
python gridworld.py -h
The default agent moves randomly:
python gridworld.py -g MazeGrid
You should see the random agent bounce around the grid until it happens upon an exit. Not the finest hour for an AI agent.
Note: The Gridworld MDP is such that you first must enter a
pre-terminal state (the double boxes shown in the GUI) and then take
the special 'exit' action before the episode actually ends (in the true
terminal state called TERMINAL_STATE, which is not shown in
the GUI). If you run an episode manually, your total return may
be less than you expected, due to the discount rate
(-d to change; 0.9 by default): a reward collected on your nth action is
discounted by a factor of 0.9^(n-1), so a +1 exit reward earned on the
sixth action contributes only 0.9^5 ≈ 0.59 to the return.
Look at the console output that accompanies the graphical output (or use
-t for all text). You will be told about each transition the agent
experiences (to turn this off, use -q). As in Pac-Man, positions are
represented by (x,y) Cartesian coordinates and any arrays are indexed by
[x][y], with 'north' being the direction of increasing y, etc. By default,
most transitions will receive a reward of zero, though you can change this
with the living reward option (-r).
Question 1 (55 points) Write a value iteration agent in
ValueIterationAgent, which has been partially specified for you in valueIterationAgents.py.
Your value iteration agent is an offline planner, not a reinforcement
learning agent, and so the relevant training option is the number of
iterations of value iteration it should run (option
-i) in its initial planning phase.
ValueIterationAgent takes an MDP on construction and runs value iteration for the specified number of iterations before the constructor returns.
Value iteration computes k-step estimates of the optimal values, V_k. In addition to running value iteration, implement the following methods for
ValueIterationAgent using V_k:
getValue(state) returns the value of a state.
getPolicy(state) returns the best action according to computed values.
getQValue(state, action) returns the q-value of the (state, action) pair.
These quantities are all displayed in the GUI: values are numbers in squares, q-values are numbers in square quarters, and policies are arrows out from each square.
Important: Use the "batch" version of value iteration where each vector Vk is computed from a fixed vector Vk-1 (like in lecture), not the "online" version where one single weight vector is updated in place. The difference is discussed in Sutton & Barto in the 6th paragraph of chapter 4.1.
Note: A policy synthesized from values of depth k (which reflect the next k rewards) will actually reflect the next k+1 rewards (i.e. you return πk+1). Similarly, the q-values will also reflect one more reward than the values (i.e. you return Qk+1). You may assume that 100 iterations is enough for convergence in the questions below.
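To make the batch update concrete, here is a minimal sketch of the core loop, assuming the mdp interface from mdp.py (getStates, getPossibleActions, getTransitionStatesAndProbs, getReward, isTerminal) and util.Counter from util.py. Treat it as an outline of the idea, not the required implementation:

import util

def computeQValue(mdp, values, discount, state, action):
    # Q(s,a) = sum over s' of T(s,a,s') * [R(s,a,s') + discount * V(s')]
    return sum(prob * (mdp.getReward(state, action, nextState)
                       + discount * values[nextState])
               for nextState, prob in
               mdp.getTransitionStatesAndProbs(state, action))

def runValueIteration(mdp, discount, iterations):
    values = util.Counter()  # state -> V(state); unseen states default to 0
    for _ in range(iterations):
        newValues = util.Counter()  # fresh vector: the "batch" version
        for state in mdp.getStates():
            actions = mdp.getPossibleActions(state)
            if mdp.isTerminal(state) or not actions:
                continue  # terminal states keep value zero
            newValues[state] = max(
                computeQValue(mdp, values, discount, state, a)
                for a in actions)
        values = newValues  # replace V_k with V_{k+1} only after a full sweep
    return values

Note how every backup in a sweep reads from the frozen values vector and writes to newValues; updating values in place during the sweep would give the "online" version instead.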
The following command loads your ValueIterationAgent,
which will compute a policy and execute it 10 times. Press a key to
cycle through values, q-values, and the simulation. You should find
that the value of the start state (V(start)) and the empirical resulting average reward are quite close.
python gridworld.py -a value -i 100 -k 10
Hint: On the default BookGrid, running value iteration for 5 iterations should give you this output:
python gridworld.py -a value -i 5
Your value iteration agent will be graded on a new grid. We will check your values, q-values, and policies after fixed numbers of iterations and at convergence (e.g. after 100 iterations).
Hint: Use the util.Counter class in util.py,
which is a dictionary with a default value of zero. Methods such as
totalCount should simplify your code. However, be careful with
argMax: the actual argmax you want may be a key not in the counter!
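A quick illustration of that pitfall, assuming the Counter behavior described above (keys read as zero when unseen, but argMax only compares keys that have been stored):

import util

qValues = util.Counter()
qValues['south'] = -2.0
qValues['west'] = -1.0

print(qValues.argMax())      # 'west': argMax only compares keys stored so far
print(qValues.totalCount())  # -3.0: sum of the stored values
print(qValues['north'])      # 0.0: unseen keys read as zero, yet 'north' was
                             # never a candidate in the argMax call above

So if a legal action has never been updated, its default q-value of zero may beat every stored key, and relying on argMax alone will miss it.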
Question 2 (10 points) On BridgeGrid,
with the default discount of 0.9 and the default noise of 0.2, the
optimal policy does not cross the bridge.
Change only ONE of the discount and noise parameters so that the optimal
policy causes the agent to attempt to cross the bridge. Put your
answer in question2() of analysis.py.
(Noise refers to how often an agent ends up in an unintended successor
state when it performs an action.) The default corresponds to:
python gridworld.py -a value -i 100 -g BridgeGrid --discount 0.9 --noise 0.2
Question 3 (35 points) Consider the
DiscountGrid layout, shown below. This grid has two
terminal states with positive payoff (shown in green), a close exit
with payoff +1 and a distant exit with payoff +10. The bottom row of
the grid consists of terminal states with negative payoff (shown in
red); each state in this "cliff" region has payoff -10. The starting
state is the yellow square. We distinguish between two types of
paths: (1) paths that "risk the cliff" and travel near the bottom
row of the grid; these paths are shorter but risk earning a large
negative payoff, and are represented by the red arrow in the figure
below. (2) paths that "avoid the cliff" and travel along the top
edge of the grid. These paths are longer but are less likely to
incur huge negative payoffs. These paths are represented by the
green arrow in the figure below.
Give an assignment of parameter values for discount, noise, and
livingReward which produce the following optimal policy types, or
state that the policy is impossible by returning the string
'NOT POSSIBLE':

(a) Prefer the close exit (+1), risking the cliff (-10)
(b) Prefer the close exit (+1), but avoiding the cliff (-10)
(c) Prefer the distant exit (+10), risking the cliff (-10)
(d) Prefer the distant exit (+10), avoiding the cliff (-10)
(e) Avoid both exits and the cliff (so an episode should never terminate)

question3a() through question3e() should each return a 3-item tuple of (discount, noise, living reward) in analysis.py. The default corresponds to:

python gridworld.py -a value -i 100 -g DiscountGrid --discount 0.9 --noise 0.2 --livingReward 0.0
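As a shape check, an answer function in analysis.py might look like this sketch; the particular numbers are placeholders, not a solution:

# Each question function returns (discount, noise, living reward), or the
# string 'NOT POSSIBLE' if no parameter setting yields the target policy.
def question3a():
    answerDiscount = 0.9       # placeholder values, not an answer
    answerNoise = 0.2
    answerLivingReward = 0.0
    return answerDiscount, answerNoise, answerLivingReward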
Note: You can check your policies in the GUI. For
example, using a correct answer to 3(a), the arrow in (0,1) should point
east, the arrow in (1,1) should also point east, and the arrow in (2,1)
should point north.
Note that your value iteration agent does not actually learn from experience. Rather, it ponders its MDP model to arrive at a complete policy before ever interacting with a real environment. When it does interact with the environment, it simply follows the precomputed policy (in effect, it becomes a reflex agent). This distinction may be subtle in a simulated environment like Gridworld, but it's very important in the real world, where the real MDP is not available.
Question 4 (20 points) You will now write a
q-learning agent, which does very little on construction, but instead
learns by trial and error from interactions with the environment through
its update(state, action, nextState, reward) method. A stub of a q-learner is specified in
qlearningAgents.py, and you can select it with the option
'-a q'. For this question, you must implement the
update, getValue, getQValue, and getPolicy methods.

Note: For getPolicy, you should break ties randomly for better behavior. The
random.choice() function will help. In a particular state, actions that your agent hasn't seen before still have a Q-value, specifically a Q-value of zero, and if all of the actions that your agent has seen before have a negative Q-value, an unseen action may be optimal.
Important: Make sure that in your getValue and
getPolicy functions, you only access Q values by calling
getQValue. This abstraction will be useful for approximate
q-learning, when you override getQValue to use features of state-action
pairs rather than state-action pairs directly.
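For orientation, here is a minimal standalone sketch of the pieces this question asks for. The real QLearningAgent stub supplies alpha, discount, and getLegalActions itself; the names below mirror that stub but are assumptions, so adapt them to your code:

import random
import util

class QLearningAgentSketch:
    """Illustrative only. Update rule:
    Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + discount * max_a' Q(s',a'))
    """
    def __init__(self, alpha, discount, getLegalActions):
        self.alpha = alpha                    # learning rate
        self.discount = discount              # discount factor gamma
        self.getLegalActions = getLegalActions
        self.qValues = util.Counter()         # (state, action) -> Q, defaults to 0

    def getQValue(self, state, action):
        return self.qValues[(state, action)]

    def getValue(self, state):
        actions = self.getLegalActions(state)
        if not actions:
            return 0.0  # terminal state
        return max(self.getQValue(state, a) for a in actions)

    def getPolicy(self, state):
        actions = self.getLegalActions(state)
        if not actions:
            return None
        best = self.getValue(state)
        # Break ties randomly among all maximizing actions; note that an
        # action never updated still competes with its default Q-value of 0
        return random.choice(
            [a for a in actions if self.getQValue(state, a) == best])

    def update(self, state, action, nextState, reward):
        # Move Q(s,a) toward the one-step sample r + gamma * V(s')
        sample = reward + self.discount * self.getValue(nextState)
        self.qValues[(state, action)] = (
            (1 - self.alpha) * self.getQValue(state, action)
            + self.alpha * sample)

Notice that getValue and getPolicy touch Q values only through getQValue, which is exactly the abstraction the note above asks you to preserve.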
With the q-learning update in place, you can watch your q-learner learn under manual control, using the keyboard:
python gridworld.py -a q -k 5 -m
Recall that -k will control the number of episodes your agent gets to learn. Watch how the agent learns about the state it was just in, not the one it moves to, and "leaves learning in its wake."