Games vs. search problems
• "Unpredictable" opponent specifying a
move for every possible opponent reply
• Time limits unlikely to find goal, must
approximate
Minimax Search
• Core of many computer games
• Pertains primarily to:
– Turn-based games
– Two players
– Players with “perfect knowledge”
Game Tree
• Nodes are states
• Edges are decisions
• Levels are called “plies”
Naïve Approach
• Given a game tree, what would be the
most straightforward playing approach?
• Any potential problems?
Minimax
• Minimizing the maximum possible loss
• Choose move which results in best state
– Select highest expected score for you
• Assume opponent is playing optimally too
– Will choose lowest expected score for you
Minimax
• Perfect play for deterministic games
• Idea: choose move to position with highest minimax
value
= best achievable payoff against best play
• E.g., 2-ply game (see the sketch below, standing in for the usual tree diagram):
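A minimal runnable Python sketch of minimax over such a tree; the leaf values follow the standard textbook 2-ply example, and the list-of-lists representation is just for illustration:

def minimax(node, maximizing):
    # A node is either a numeric leaf (its utility) or a list of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# 2-ply game: max to move at the root, min at the middle layer.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # -> 3 = max(min(3,12,8), min(2,4,6), min(14,5,2))

The root picks the branch whose minimum leaf is largest: the best achievable payoff against best play.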
Properties of minimax
• Complete? Yes (if tree is finite)
• Optimal? Yes (against an optimal opponent)
• Time complexity? O(b^m)
• Space complexity? O(bm) (depth-first exploration)
• For chess, b ≈ 35, m ≈ 100 for "reasonable" games
→ exact solution completely infeasible
Resource limits
Suppose we have 100 secs, explore 10^4 nodes/sec
→ 10^6 nodes per move
Standard approach:
• cutoff test:
e.g., depth limit (perhaps add quiescence search)
• evaluation function
= estimated desirability of position
Evaluation Functions
• Assign a utility score to a state
– Different for each player?
• Usually a range of integers
– [-1000,+1000]
• +infinity for win
• -infinity for loss
Cutting Off Search
• How to score a game before it ends?
– You have to fudge it!
• Use a heuristic function to approximate
state’s utility
Cutting off search
MinimaxCutoff is identical to MinimaxValue except (sketched below):
1. Terminal? is replaced by Cutoff?
2. Utility is replaced by Eval
Does it work in practice?
b^m = 10^6 with b = 35 gives m = 4
4-ply lookahead is a hopeless chess player!
– 4-ply ≈ human novice
– 8-ply ≈ typical PC, human master
– 12-ply ≈ Deep Blue, Kasparov
(A computer program which evaluates no further than its own legal moves plus the
legal responses to those moves is searching to a depth of two-ply.)
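Concretely, the two substitutions might look like this; cutoff_test, eval_fn, legal_moves, and apply_move are hypothetical names for the host program's own routines, so this is a sketch rather than a full implementation:

def minimax_cutoff(state, depth, maximizing):
    if cutoff_test(state, depth):      # was: Terminal?(state)
        return eval_fn(state)          # was: Utility(state); now an estimate
    values = [minimax_cutoff(apply_move(state, m), depth - 1, not maximizing)
              for m in legal_moves(state)]
    return max(values) if maximizing else min(values)

A simple cutoff_test would return True when depth == 0 or the state is terminal.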
Example Evaluation Function
• For chess, typically a linear weighted sum of features (example sketch below)
Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
• e.g., w1 = 9 with
f1(s) = (number of white queens) – (number of black
queens), etc.
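As a concrete (hypothetical) instance of this weighted sum, a pure material counter for chess using the conventional piece values:

# Conventional chess material weights: these play the role of w1..wn.
PIECE_WEIGHTS = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def material_eval(white_counts, black_counts):
    # Each fi(s) is (number of white pieces of a type) minus
    # (number of black pieces of that type), as in the slide.
    return sum(w * (white_counts.get(p, 0) - black_counts.get(p, 0))
               for p, w in PIECE_WEIGHTS.items())

# White is up a queen, black is up a pawn:
print(material_eval({"Q": 1}, {"P": 1}))  # -> 9 - 1 = 8

Real evaluation functions add positional features (mobility, king safety, pawn structure) with their own weights.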
Evaluating States
• Assuming an ideal evaluation function,
how would you make a move?
• Is this a good strategy with a bad function?
Look Ahead
• Instead of only evaluating immediate
future, look as far ahead as possible
Complexity
• What is the space complexity of depth-
bounded Minimax?
– Board size s
– Depth d
– Possible moves m
• O(ds + m)
• Board positions can be released as values
bubble up
Minimax Algorithm
• Did I just do your project for you?
• No!
• You need to create:
– Evaluation function
– Move generator
– did_i_win? function
Isolation Clarification
• Standalone game clients
– Opponent moves entered manually
– Output your move on stdout
• Assignment is out of 100
• Tournament is single elimination
– But there will be food!
Recap
• What is a zero-sum game?
• What is a game tree?
• What is Minimax?
– Why is it called that?
• What is its space complexity?
• How can the Minimax algorithm be
simplified?
– Will this work for all games?
Next Up
• Recall that minimax will produce optimal
play against an optimal opponent if the
entire tree is searched
• Is the same true if a cutoff is used?
Horizon Effect
• Your algorithm searches to depth n
• What happens if:
– Evaluation(s) at depth n is very positive
– Evaluation(s) at depth n+1 is very negative
• Or:
– Evaluation(s) at depth n is very negative
– Evaluation(s) at depth n+1 is very positive
• Will this ever happen in practice?
Search Limitation Mitigation
• Sometimes it is useful to look deeper into
the game tree
• We could peek past the horizon…
• But how can you decide what nodes to
explore?
– Quiescence search
Quiescence Search
• Human players have some intuition about
move quality
– “Interesting” vs. “boring”
– “Promising” vs. “dead end”
– “Noisy” vs. “quiet”
• Expand horizon for potential high impact
moves
• Quiescence search adds this to Minimax
Quiescence Search
• Additional search performed on leaf nodes:
  if looks_interesting(leaf_node):
      extend_search_depth(leaf_node)
  else:
      normal_evaluation(leaf_node)
Quiescence Search
• What constitutes an “interesting” state?
– Moves that substantially alter game state
– Moves that cause large fluctuations in
evaluation function output
• Chess example: capture moves
• Must be careful to prevent indefinite
extension of search depth
– Chess: captures are self-limiting, but checks could recur indefinitely (sketch below)
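A hedged sketch of what this can look like, continuing the Python sketches above; capture_moves, eval_fn, and apply_move are hypothetical names for the host program's routines, and only captures are treated as "noisy" (a common chess choice, since captures strictly reduce material and so must terminate):

def quiescence_value(state, maximizing):
    stand_pat = eval_fn(state)            # score if we stop right here
    noisy = capture_moves(state)          # high-impact moves only
    if not noisy:
        return stand_pat                  # quiet position: normal evaluation
    values = [quiescence_value(apply_move(state, m), not maximizing)
              for m in noisy]
    best = max(values) if maximizing else min(values)
    # The side to move may also decline the noisy lines ("stand pat").
    return max(stand_pat, best) if maximizing else min(stand_pat, best)

At the depth cutoff, Eval is replaced by quiescence_value, so leaf scores are only taken in quiet positions.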
Search Limitation Mitigation
• Do you always need to search the entire
tree?
– No!
• Sometimes it is useful to look less deeply
into the tree
• But how can you decide what branches to
ignore?
– Tree pruning
Tree Pruning
• Moves are chosen under the assumption of
an optimal adversary
• You know the best move so far
• If you find a branch with a worse move, is
there any point in looking further?
• Thought experiment: bag game
Alpha-Beta Pruning
• During Minimax, keep track of two
additional values
• Alpha
– Your best score via any path
• Beta
– Opponent’s best score via any path
Alpha-Beta Pruning
• Max player (you) will never make a move
that could lead to a worse score for you
• Min player (opponent) will never make a
move that could lead to a better score for
you
• Stop evaluating a branch whenever:
– A value greater than beta is found (at a max node)
– A value less than alpha is found (at a min node)
Why is it called α-β?
• α is the value of the best (i.e., highest-value)
choice found so far at any choice point along
the path for max
• If v is worse than α, max will avoid it
→ prune that branch
• Define β similarly for min
Alpha-Beta Pruning
• As the search tree is traversed, the possible
utility value window shrinks as
– Alpha increases
– Beta decreases
Alpha-Beta Pruning
• Once there is no longer any overlap in the
possible ranges of alpha and beta (alpha ≥ beta),
it is safe to conclude that the current node is a
dead end
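Putting the last few slides together, a minimal runnable sketch of minimax with alpha-beta pruning, using the same nested-list trees as the earlier minimax sketch (leaves are numeric utilities):

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node                       # leaf: exact utility
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)     # our best score so far rises
            if alpha >= beta:             # window empty: min avoids this node
                break                     # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)       # opponent's bound falls
            if alpha >= beta:             # window empty: max avoids this node
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3, same
# answer as plain minimax, but the leaves 4 and 6 are never examined.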
Tree Pruning vs. Heuristics
• Search depth cutoff may affect the outcome
of the algorithm
• How about pruning?
Move Ordering
• Does the order in which moves are listed
have any impact on alpha-beta?
Move Ordering
• Techniques for improving move ordering:
• Apply evaluation function to nodes prior to
expanding children (see the sketch after this list)
– Search in descending order
– But sacrifices search depth
• Cache results from previous searches
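A sketch of the first technique, ordering moves by a shallow static evaluation before the deep search; legal_moves, apply_move, and eval_fn remain hypothetical names:

def ordered_moves(state, maximizing):
    # Sort by the static evaluation of the resulting position so the
    # strongest-looking moves are searched first; good early moves
    # tighten alpha/beta sooner and prune more of the tree.
    return sorted(legal_moves(state),
                  key=lambda m: eval_fn(apply_move(state, m)),
                  reverse=maximizing)   # max wants descending order

The extra eval_fn calls are the "sacrificed depth" mentioned above: time spent ordering is time not spent searching.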
Properties of α-β
• Pruning does not affect final result
• Good move ordering improves effectiveness of pruning
• With "perfect ordering," time complexity = O(bm/2
)
doubles depth of search
• A simple example of the value of reasoning about which
computations are relevant (a form of metareasoning)
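To see the depth doubling concretely: with b ≈ 35, plain minimax to depth 4 examines roughly 35^4 ≈ 1.5 × 10^6 positions, while alpha-beta with perfect ordering reaches depth 8 on about the same budget, since 35^(8/2) = 35^4.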
Deterministic games in practice
• Checkers: Chinook ended the 40-year reign of human world champion
Marion Tinsley in 1994. Used a pre-computed endgame database
defining perfect play for all positions involving 8 or fewer pieces on
the board, a total of 444 billion positions.
• Chess: Deep Blue defeated human world champion Garry Kasparov
in a six-game match in 1997. Deep Blue searches 200 million
positions per second, uses very sophisticated evaluation, and
undisclosed methods for extending some lines of search up to 40
ply.
• Othello: human champions refuse to compete against computers,
who are too good.
• Go: human champions refuse to compete against computers, who
are too bad. In go, b > 300, so most programs use pattern
knowledge bases to suggest plausible moves.
Summary
• Games are fun to work on!
• They illustrate several important points
about AI
• Perfection is unattainable → must
approximate
• Good idea to think about what to think
about