Artificial Intelligence
Course Code: ECE434


Heuristic Search Techniques




                         Suryender Kumar Sharma
                                 Suryender.16890@lpu.co.in
                          Asst. Prof. D: EEE - SEE- R & A
                              Lovely Professional University
Review
   General Purpose Problem Solving
   Production System
   State-Space Search
   Control Strategies
   Characteristics of Problems
   Exhaustive Search Techniques
    (BFS,DFS,DFID,BS)
   Analysis of Search Methods
Travelling Salesman Problem (TSP)

 Statement:
To find the shortest route that visits all the
 cities exactly once and returns to the starting
 point.
Assume that there are n cities and the distance
 between each pair of cities is given.
There are (n-1)! possible tours for n cities.
Travelling Salesman Problem (TSP)

 Start generating complete paths, keeping track
  of the shortest path found so far.


 Stop exploring any path as soon as its partial
  length becomes greater than the shortest path
  found so far.
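A minimal Python sketch of this pruning idea (illustrative only; the 4-city distance matrix below is made up, not the 5-city example on the next slide):

```python
# Branch-and-bound sketch for TSP: extend partial tours, prune any partial
# tour whose length already exceeds the best complete tour found so far.
import math

def tsp_branch_and_bound(dist):
    n = len(dist)
    best_len = math.inf
    best_tour = None

    def extend(tour, length):
        nonlocal best_len, best_tour
        if length >= best_len:          # prune: partial tour already too long
            return
        if len(tour) == n:              # complete tour: close the cycle
            total = length + dist[tour[-1]][tour[0]]
            if total < best_len:
                best_len, best_tour = total, tour + [tour[0]]
            return
        for city in range(n):
            if city not in tour:
                extend(tour + [city], length + dist[tour[-1]][city])

    extend([0], 0)                      # fix city 0 as the starting point
    return best_tour, best_len

# Example with 4 illustrative cities (symmetric distance matrix).
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_branch_and_bound(d))   # -> ([0, 1, 3, 2, 0], 80)
```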
TSP
n = 5
Number of paths = (n-1)! = 4! = 24

[Figure: complete graph of the five cities C1-C5, with the distance between
each pair of cities labelled on the edges.]
Using problem-specific knowledge
 to aid searching
• Without incorporating knowledge into searching, one
  can have no bias (i.e. a preference) on the search space
  ("search everywhere!").


• Without a bias, one is forced to look everywhere to find
  the answer. Hence, the complexity of uninformed search
  is intractable.
Using problem-specific knowledge
to aid searching
• With knowledge, one can search the state space as if given
  "hints" when exploring a maze.
     – Heuristic information in search = hints
• Leads to a dramatic speed-up in efficiency.

[Figure: a search tree rooted at A with children B, C, D, E and further
descendants; the heuristic directs the search into only one subtree
("search only in this subtree!"), ignoring the rest of the tree.]
More formally, why do heuristic
 functions work?

• In any search problem where there are at most b choices at
  each node and the goal node is at depth d, a naive search
  algorithm would have to, in the worst case, examine around
  O(b^d) nodes before finding a solution (exponential time
  complexity).


• Heuristics improve the efficiency of search algorithms by
  reducing the effective branching factor from b to (ideally)
  a low constant b* such that
   – 1 <= b* << b
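For example, with b = 10 and d = 5, a blind search may examine on the order of 10^5 = 100,000 nodes, whereas a heuristic that cuts the effective branching factor to b* = 2 examines only about 2^5 = 32.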
Heuristic Search Techniques

 General Purpose Heuristics:
  – Are useful in various problem domains
 Special purpose Heuristics:
  – Are domain specific
Heuristic Search Techniques
 General-purpose heuristics.
 Best-first search.
 Branch and bound search (uniform cost search).
 A* algorithm
 Hill climbing.
 Beam search.
General Purpose Heuristics

1. For combinatorial problems, a general-purpose heuristic is the
   nearest-neighbour algorithm, which works by selecting the locally
   superior alternative at each step (a sketch follows below).
2. Mathematical analysis of such heuristics is often not possible to perform:
          It is more fun to see a program do something intelligent
         than to prove it.
          AI problem domains are complex, so it is usually not possible to
         produce an analytical proof that a heuristic will work.
          It is also not possible to make a statistical analysis of
         program behaviour.
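A minimal sketch of the nearest-neighbour idea from point 1 above (illustrative, not from the slides); it reuses the distance-matrix format of the earlier TSP sketch:

```python
# Nearest-neighbour heuristic for TSP: always move to the closest unvisited
# city. Fast, but gives no optimality guarantee.
def nearest_neighbour_tour(dist, start=0):
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])  # locally superior choice
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    tour.append(start)                  # return to the starting city
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length
```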
Heuristic Functions
• A heuristic function is a function f(n) that gives an estimate of the “cost” of
  getting from node n to the goal state – so that the node with the least cost
  among all possible choices can be selected for expansion first.


• Three approaches to defining f:


    – f measures the value of the current state (its “goodness”)


    – f measures the estimated cost of getting to the goal from the current state:
         •   f(n) = h(n) where h(n) = an estimate of the cost to get from n to a goal


    – f measures the estimated cost of getting to the goal state from the current state and
      the cost of the existing path to it. Often, in this case, we decompose f:
         •   f(n) = g(n) + h(n) where g(n) = the cost to get to n (from initial state)
Approach 1: f Measures the
Value of the Current State
• Usually the case when solving optimization problems
   – Finding a state such that the value of the metric f is optimized


• Often, in these cases, f could be a weighted sum of a set of
  component values:

   – N-Queens
       • Example: the number of queens under attack …
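One way to realise such an evaluation for N-Queens (an illustrative sketch, not the slide's own code) is to count the pairs of queens attacking each other; lower is better:

```python
# N-Queens evaluation: count pairs of queens attacking each other.
# board[i] = column of the queen in row i (one queen per row).
def attacking_pairs(board):
    count = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_col = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_col or same_diag:
                count += 1
    return count

print(attacking_pairs([1, 3, 0, 2]))   # 0 -> a solution for n = 4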
Approach 2: f Measures the Cost to the
Goal


A state X would be better than a state Y if the estimated
  cost of getting from X to the goal is lower than that of Y
  – because X would be closer to the goal than Y


• 8–Puzzle
h1: The number of misplaced tiles
(squares with number).
h2: The sum of the distances of the tiles
from their goal positions.
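A hedged sketch of h1 and h2 for the 8-puzzle (assumed representation: the state is a flat tuple of nine entries, with 0 marking the blank):

```python
# 8-puzzle heuristics. State is a flat tuple of length 9; 0 marks the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1_misplaced(state, goal=GOAL):
    # Number of tiles (excluding the blank) not in their goal position.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    # Sum of Manhattan (city-block) distances of each tile from its goal square.
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```

Note that h2(n) >= h1(n) for every state, so h2 is the more informed of the two.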
Approach 3: f measures the total cost of the
solution path (Admissible Heuristic Functions)


• A heuristic function f(n) = g(n) + h(n) is admissible if h(n) never
  overestimates the cost to reach the goal.
    – Admissible heuristics are “optimistic”: “the cost is not that much …”
• Note that g(n) is the exact cost to reach node n from the initial state.
• Therefore, f(n) never over-estimates the true cost to reach the goal state
  through node n.
• Theorem: A* search is optimal if h(n) is admissible.
    – I.e. the search using h(n) returns an optimal solution.
• Given h2(n) >= h1(n) for all n, it is always more efficient to use h2(n).
    – h2 is more realistic than h1 (more informed), though both are optimistic.
Traditional informed search
 strategies
• Greedy Best first search
   – “Always chooses the successor node with the best f value”
     where f(n) = h(n)
   – We choose the one that is nearest to the final state among all
     possible choices


• A* search
   – Best first search using an “admissible” heuristic function f
     that takes into account the current cost g
   – Always returns the optimal solution path
Informed Search Strategies

         Best First Search
Informed Search Strategies

          Greedy Search
        eval-fn: f(n) = h(n)
Greedy Search
f(n) = h(n) = straight-line distance heuristic

[Figure: road map from Start node A to Goal node I.
Edges (with step costs): A-B 75, A-C 118, A-E 140, C-D 111,
E-G 80, E-F 99, G-H 97, H-I 101, F-I 211.]

Heuristic h(n): A 366, B 374, C 329, D 244, E 253,
F 178, G 193, H 98, I 0.
Greedy Search: Tree Search
[Search tree: from Start node A, the successors are C [329], B [374] and
E [253]. Expanding E (lowest h) gives G [193], F [178] and A [366];
expanding F gives E [253] and I [0]; I is the Goal.]

Path found: A-E-F-I
Path cost(A-E-F-I) = 253 + 178 + 0 = 431
dist(A-E-F-I) = 140 + 99 + 211 = 450
Greedy Search: Optimal ?
[Same map and heuristic table as before; f(n) = h(n) = straight-line
distance heuristic.]

Greedy search returns A-E-F-I with dist = 450, but the shorter route is
dist(A-E-G-H-I) = 140+80+97+101 = 418, so greedy search is not optimal.
Greedy Search: Complete ?
[Same map as before, except that the heuristic value of C is changed to
h(C) = 250 (marked ** in the original table); f(n) = h(n) = straight-line
distance heuristic.]
Greedy Search: Tree Search
[Search tree with the modified heuristic: from Start node A, the successors
are C [250], B [374] and E [253]. Greedy search expands C (250), which
generates D [244]; expanding D regenerates C [250], which regenerates
D [244], and so on (an infinite branch). Without checking for repeated
states, the search never reaches the Goal.]
Greedy Search: Time and Space
Complexity ?
• Greedy search is not optimal.
• Greedy search is incomplete without systematic checking of
  repeated states.
• In the worst case, the time and space complexity of greedy
  search are both O(b^m), where b is the branching factor and
  m is the maximum path length.
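As a concrete illustration, here is a minimal greedy best-first search in Python (a sketch, not the lecture's own code). The edges and the straight-line-distance table are taken from the map on the earlier slides; the visited set is exactly the "systematic checking of repeated states" just mentioned.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    # graph: dict node -> list of (neighbour, step_cost); h: dict node -> heuristic.
    # Nodes are expanded purely in order of h(n); the visited set guards against
    # the infinite C-D-C-D... branch shown on the slides.
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, _cost in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Data taken from the slides' map (undirected edges, heuristic table).
edges = {('A','B'):75, ('A','C'):118, ('A','E'):140, ('C','D'):111,
         ('E','G'):80, ('E','F'):99, ('G','H'):97, ('H','I'):101, ('F','I'):211}
graph = {}
for (u, v), c in edges.items():
    graph.setdefault(u, []).append((v, c))
    graph.setdefault(v, []).append((u, c))
h = {'A':366, 'B':374, 'C':329, 'D':244, 'E':253, 'F':178, 'G':193, 'H':98, 'I':0}
print(greedy_best_first(graph, h, 'A', 'I'))   # -> ['A', 'E', 'F', 'I'] (dist 450, not optimal)
```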
Informed Search Strategies

     Branch and bound search
      Uniform Cost Search
            f(n)=g(n)
Uniform Cost Search (UCS)

[Figure: a search tree whose root has two children reached with step costs
5 and 2, each node labelled [x] = g(n), the path cost of node n. UCS always
expands the cheapest frontier node: [2] first, then [3], then [5], and the
search stops when the Goal state is removed from the queue with path cost
g(n) = [6].]
Uniform Cost Search (UCS)

   In case of equal step costs, Breadth First search finds
    the optimal solution.

   For any step-cost function, Uniform Cost search
    expands the node n with the lowest path cost.

   UCS takes into account the total cost: g(n).

   UCS is guided by path costs rather than depths. Nodes
    are ordered according to their path cost.
Uniform Cost Search (UCS)

   Main idea: Expand the cheapest node. Where the cost is the path
    cost g(n).

   Implementation:
    Enqueue nodes in order of cost g(n).
    QUEUING-FN:- insert in order of increasing path cost.
    Enqueue new node at the appropriate position in the queue so that we
    dequeue the cheapest node.

   Complete? Yes.
    Optimal? Yes, if the path cost is a non-decreasing function of depth.
    Time Complexity: O(b^d)
    Space Complexity: O(b^d); note that every node in the fringe is kept in
    the queue.
Branch and bound search
(uniform cost search).
• A cost function g(X) is designed that gives the
  cumulative expense of the path from the start
  node to the current node X.
• The least-cost path obtained so far is expanded at
  each iteration, until we reach the goal state.
• There can be many incomplete paths, as the shortest
  one is always extended one level further.
• Each expansion can create several new incomplete paths.
Algorithm: Branch and Bound
Input: START and GOAL states
Local variables: OPEN, CLOSED, NODE, SUCCs, FOUND;
OUTPUT: Yes or No
Method:
• Initially store the root node with g(root) = 0 in the OPEN list, set CLOSED = {} and
FOUND = false;
• While (OPEN ≠ {} and FOUND = false) do
{
• Remove the top element from OPEN and call it NODE;
• If NODE = GOAL node then FOUND = true else
• {
• Put NODE in the CLOSED list;
• Find the SUCCs of NODE, if any, compute their ‘g’ values and store them in the
OPEN list;
• Sort all the nodes in the OPEN list based on their cost-function values;
}
} /* end of while */
If FOUND = true then return Yes else return No
• Stop
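A Python rendering of this procedure (a sketch: the graph is assumed to be an adjacency dictionary of (successor, step-cost) pairs, and a priority queue replaces the explicit sorting of OPEN):

```python
import heapq

def branch_and_bound(graph, start, goal):
    # OPEN is a priority queue ordered by g; CLOSED records expanded nodes.
    open_list = [(0, start, [start])]          # (g, node, path)
    closed = set()
    while open_list:
        g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g                     # cheapest path found ("Yes")
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (g + cost, nbr, path + [nbr]))
    return None, float('inf')                  # FOUND stays false ("No")
```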
Informed Search Strategies

            A* Search

         eval-fn: f(n)=g(n)+h(n)
A* (A Star)

• Greedy Search minimizes a heuristic h(n) which is an
  estimated cost from a node n to the goal state. However,
  although greedy search can considerably cut the search time
  (efficient), it is neither optimal nor complete.


• Uniform Cost Search minimizes the cost g(n) from the
  initial state to n. UCS is optimal and complete but not
  efficient.


• New Strategy: Combine Greedy Search and UCS to get an
  efficient algorithm which is complete and optimal.
A* (A Star)

• A* uses a heuristic function which combines
  g(n) and h(n): f(n) = g(n) + h(n)


• g(n) is the exact cost to reach node n from
  the initial state. Cost so far up to node n.


• h(n) is an estimation of the remaining cost to
  reach the goal.
A* (A Star)

[Figure: f(n) = g(n) + h(n), where g(n) is the path cost from the start node
to n and h(n) is the estimated cost from n to the goal.]
A* Search
f(n) = g(n) + h(n)
g(n): the exact cost to reach node n from the initial state.

[Figure: the same road map from Start node A to Goal node I, with the same
edge costs and straight-line-distance heuristic table as in the greedy
search example: h(A)=366, h(B)=374, h(C)=329, h(D)=244, h(E)=253,
h(F)=178, h(G)=193, h(H)=98, h(I)=0.]
A* Search: Tree Search
[Search tree: from Start node A, the successors are C [447], E [393] and
B [449]. A* expands E (lowest f), giving G [413] and F [417]; expanding G
gives H [415]; expanding H gives I [418]; expanding F gives a second copy
of I with f = [450]. The Goal I is removed from the queue with f = 418, so
A* returns the path A-E-G-H-I with cost 418.]
A* with h() not Admissible

   h() overestimates the cost to reach
              the goal state
A* Search: h not admissible !
f(n) = g(n) + h(n)
g(n): the exact cost to reach node n from the initial state.

[Figure: the same map, but h(H) is raised to 138, which overestimates the
true cost from H to the Goal I (the road distance H-I is only 101). All
other heuristic values are unchanged.]
A* Search: Tree Search
[Search tree with the overestimating heuristic: from A, the successors are
C [447], E [393] and B [449]. Expanding E gives G [413] and F [417];
expanding G gives H, now with f = 317 + 138 = [455]; expanding F gives the
Goal I with f = [450]; expanding C gives D [473]. The Goal I is removed from
the queue with f = 450, so A* returns A-E-F-I with cost 450 instead of the
optimal A-E-G-H-I with cost 418.]

A* not optimal !!!
A* Search: Analysis
• A* is complete unless there are infinitely many nodes with f < f(G).
• A* is optimal if the heuristic h is admissible.
• Time complexity depends on the quality of the heuristic, but is still
  exponential in the worst case.
• For space complexity, A* keeps all generated nodes in memory: worst-case
  O(b^d) space. An iterative-deepening version (IDA*) is possible to reduce
  memory.
A* Algorithm
1. The search queue Q is initially empty.
2. Place the start state s in Q with f value h(s).
3. If Q is empty, return failure.
4. Take the node n from Q with the lowest f value.
   (Keep Q sorted by f values and pick the first element.)
5. If n is a goal node, stop and return the solution.
6. Generate the successors of node n.
7. For each successor n’ of n do:
         a) Compute f(n’) = g(n) + cost(n,n’) + h(n’).
         b) If n’ is new (never generated before), add n’ to Q.
         c) If node n’ is already in Q with a higher f value, replace it with the
               current f(n’) and place it in sorted order in Q.
         End for
8. Go to step 3.
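A runnable A* sketch corresponding to the steps above (assumptions: the graph is an adjacency dictionary of (neighbour, step-cost) pairs and h is a dictionary, as in the earlier greedy sketch; the best_g bookkeeping plays the role of step 7c):

```python
import heapq

def a_star(graph, h, start, goal):
    # Priority queue keyed by f = g + h; best_g remembers the cheapest g seen
    # per node, so worse duplicate entries are effectively replaced.
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                               # stale queue entry
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

# With the slides' map and admissible h, this returns A-E-G-H-I with cost 418.
```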
Informed Search Strategies

       Iterative Deepening A*
Iterative Deepening A*:IDA*


• Use f(N) = g(N) + h(N) with admissible and
  consistent h


• Each iteration is depth-first with cutoff on the
  value of f of expanded nodes
Consistent Heuristic
• The admissible heuristic h is consistent (or satisfies
  the monotone restriction) if for every node N and
  every successor N’ of N:

  h(N) ≤ c(N,N’) + h(N’)

  (triangular inequality)

• A consistent heuristic is admissible.
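A tiny illustrative check of this condition over all edges of a graph (assumed adjacency-dictionary representation, as in the other sketches):

```python
def is_consistent(graph, h):
    # h is consistent iff h(N) <= c(N, N') + h(N') for every edge (N, N').
    return all(h[n] <= cost + h[nbr]
               for n, nbrs in graph.items()
               for nbr, cost in nbrs)
```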
IDA* Algorithm

• In the first iteration, we determine an “f-cost limit” (cut-off value)
  f(n0) = g(n0) + h(n0) = h(n0), where n0 is the start node.


• We expand nodes using the depth-first algorithm and backtrack whenever
  f(n) for an expanded node n exceeds the cut-off value.

• If this search does not succeed, determine the lowest f-value among the
  nodes that were visited but not expanded.

• Use this f-value as the new limit value – cut-off value and do another depth-
  first search.

• Repeat this procedure until a goal node is found.
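A compact IDA* sketch following this procedure (same assumed graph and heuristic representation as the earlier sketches; the cut-off is raised to the smallest f value that exceeded it):

```python
def ida_star(graph, h, start, goal):
    def dfs(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None                 # report the f that exceeded the bound
        if node == goal:
            return f, path
        minimum = float('inf')
        for nbr, cost in graph.get(node, []):
            if nbr in path:                # avoid cycles along the current path
                continue
            t, found = dfs(nbr, g + cost, bound, path + [nbr])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h[start]                       # first cut-off: f(n0) = h(n0)
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                    # no solution
        bound = t                          # new cut-off: lowest f that exceeded it
```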
Hill Climbing
Input: START and GOAL states
Local variables: OPEN, CLOSED, NODE, SUCCs, FOUND;
OUTPUT: Yes or No
Method:
• Initially store the root node in the OPEN list (maintained as a stack) and set
FOUND = false;
• While (OPEN ≠ {} and FOUND = false) do
{
• Remove the top element from OPEN and call it NODE;
• If NODE = GOAL node then FOUND = true else
{
• Find the SUCCs of NODE, if any;
• Sort the SUCCs by their estimated cost to the goal state and add them to the
front of the OPEN list;
}
} /* end of while */
If FOUND = true then return Yes else return No
• Stop
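A direct Python rendering of the pseudocode above (a sketch; goal_test, successors and estimate are assumed, user-supplied functions, and, like the pseudocode, it does no repeated-state checking):

```python
def hill_climbing(start, goal_test, successors, estimate):
    # Stack-based hill climbing with backtracking, as in the pseudocode:
    # successors are sorted by estimated cost to the goal and pushed onto the
    # front of OPEN, so the most promising move is tried first.
    open_list = [start]                    # OPEN maintained as a stack
    while open_list:
        node = open_list.pop(0)            # remove the top element
        if goal_test(node):
            return node                    # FOUND = true
        succs = sorted(successors(node), key=estimate)
        open_list = succs + open_list      # add to the front of OPEN
    return None                            # return "No"
```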
Hill Climbing: Disadvantages
• May fail to find a solution.
• The algorithm may terminate not by finding a
  goal state but by getting to a state from
  which no better state can be generated.


• This happens if the program has reached a
   – Local maximum,
   – Plateau, or
   – Ridge.
Hill Climbing: Disadvantages
Local maximum
A state that is better than all of its
  neighbours, but not better than some
  other states far away.




Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all
  neighbouring states have the same value.
• Escaping it essentially requires a random walk.




Hill Climbing: Disadvantages
Ridge
Special kind of local maximum.
The orientation of the high region, compared
  to the set of available moves, makes it
  impossible to climb up.
Many moves executed serially may increase
 the height.


Hill Climbing: Disadvantages
Ways Out
• Backtrack to some earlier node and try going in
  a different direction. (good way in dealing with
  local maxima)
• Make a big jump to try to get in a new section.
  (good way in dealing with plateaus)
• Moving in several directions at once. (good
  strategy for dealing with ridges)
Hill Climbing: Disadvantages
• Hill climbing is a local method:
  Decides what to do next by looking only at the
  “immediate” consequences of its choices rather
  than by exhaustively exploring all the
  consequences.
• Global information might be encoded in
  heuristic functions.
Beam search
Input: START and GOAL states
Local variables: OPEN, NODE, SUCCs, W_OPEN, FOUND;
OUTPUT: Yes or No
Method:
• NODE = root node; FOUND = false;
• If NODE = GOAL node then FOUND = true, else find the SUCCs of NODE, if any, with
their estimated costs and store them in the OPEN list;
• While (FOUND = false and it is possible to proceed further) do
{
• Sort the OPEN list;
• Select the top W elements from the OPEN list, put them in the W_OPEN list and empty
the OPEN list;
• For each NODE in the W_OPEN list
{
• If NODE = GOAL state then FOUND = true, else find the SUCCs of NODE, if any, with
their estimated costs and store them in the OPEN list;
}
} /* end of while */
• If FOUND = true then return Yes else return No
• Stop
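A Python sketch of this beam search (same assumed helper functions as in the hill-climbing sketch; W is the beam width):

```python
def beam_search(start, goal_test, successors, estimate, W):
    # Keep only the W best nodes (by estimated cost) at each level.
    frontier = [start]
    while frontier:
        frontier.sort(key=estimate)
        beam = frontier[:W]                # select the top W elements
        frontier = []
        for node in beam:
            if goal_test(node):
                return node                # FOUND = true
            frontier.extend(successors(node))
    return None                            # not able to proceed further
```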
Beam search




         Continue until the goal state is found
         or it is not possible to proceed further.
Constraint Satisfaction Problems
(CSPs)
• Standard search problem:
  – state is a “black box” – any data structure that supports
    successor function, heuristic function, and goal test


• CSP:
  – state is defined by variables Xi with values from domain Di
  – goal test is a set of constraints specifying allowable
    combinations of values for subsets of variables
Constraint Satisfaction
• Constraint Satisfaction problems in AI have
  goal of discovering some problem state
  that satisfies a given set of constraints.

• Design tasks can be viewed as constraint
  satisfaction problems in which a design
  must be created within fixed limits on
  time, cost, and materials.
Constraint satisfaction
• Constraint satisfaction is a search procedure that operates in a space of
  constraint sets. The initial state contains the constraints that are
  originally given in the problem description. A goal state is any state
  that has been constrained “enough”, where “enough” must be defined
  for each problem. For example, in cryptarithmetic, enough means that
  each letter has been assigned a unique numeric value.


• Constraint Satisfaction is a two-step process:
    – First, constraints are discovered and propagated as far as possible
      throughout the system.
    – Then, if there is still not a solution, search begins. A guess about
      something is made and added as a new constraint.


Constraint Satisfaction: Example
• Cryptarithmetic Problem:
             SEND
            +MORE
            -----------
            MONEY
Initial State:
• No two letters have the same value
• The sums of the digits must be as shown in the
  problem
Goal State:
• All letters have been assigned a digit in such a way that
  all the initial constraints are satisfied.
Cryptarithmetic Problem: Constraint
Satisfaction
•     The solution process proceeds in cycles. At each cycle, two
      significant things are done:
1.    Constraints are propagated by using rules that correspond to the
      properties of arithmetic.
2.    A value is guessed for some letter whose value is not yet
      determined.


A few Heuristics can help to select the best guess to try first:


•     If there is a letter that has only two possible values and another with
      six possible values, there is a better chance of guessing right on
      the first than on the second.
•     Another useful Heuristic is that if there is a letter that participates
      in many constraints then it is a good idea to prefer it to a letter
      that participates in a few.
Solving a Cryptarithmetic Problem

      SEND
    + MORE
  -------------
     MONEY

Initial state (constraint propagation):
  M = 1
  S = 8 or 9
  O = 0 or 1  =>  O = 0
  N = E or E+1  =>  N = E+1
  C2 = 1
  N + R > 8
  E ≠ 9

Guess E = 2:
  N = 3
  R = 8 or 9
  2 + D = Y  or  2 + D = 10 + Y

Case C1 = 0:                     Case C1 = 1:
  2 + D = Y                        2 + D = 10 + Y
  N + R = 10 + E                   D = 8 + Y
  R = 9                            D = 8 or 9
  S = 8
                                   D = 8  =>  Y = 0 ; conflict
                                   D = 9  =>  Y = 1 ; conflict
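For contrast with the constraint-propagation solution above, a brute-force Python check of SEND + MORE = MONEY (a sketch: it simply tries digit assignments instead of propagating constraints):

```python
from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'                       # the 8 distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:         # leading digits cannot be zero
            continue
        send  = a['S']*1000 + a['E']*100 + a['N']*10 + a['D']
        more  = a['M']*1000 + a['O']*100 + a['R']*10 + a['E']
        money = a['M']*10000 + a['O']*1000 + a['N']*100 + a['E']*10 + a['Y']
        if send + more == money:
            return a                           # 9567 + 1085 = 10652

print(solve_send_more_money())
```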
Constraint Satisfaction
Two-step process:
1. Constraints are discovered and
   propagated as far as possible.
2. If there is still not a solution, then search
   begins, adding new constraints.
Constraint Satisfaction
Two kinds of rules:
1. Rules that define valid constraint propagation.
2. Rules that suggest guesses when necessary.
When to Use Search Techniques

• The search space is small, and
  – There are no other available techniques, or
  – It is not worth the effort to develop a more efficient
    technique


• The search space is large, and
  – There are no other available techniques, and
  – There exist “good” heuristics
Conclusions

• Frustration with uninformed search led to the idea
  of using domain specific knowledge in a search so
  that one can intelligently explore only the relevant
  part of the search space that has a good chance of
  containing the goal state. These new techniques are
  called informed (heuristic) search strategies.


• Even though heuristics improve the performance of
  informed search algorithms, they are still time
  consuming, especially for large problem instances.

16890 unit 2 heuristic search techniques

  • 1.
    Artificial Intelligence Course Code:ECE434 Heuristic Search Techniques Suryender Kumar Sharma Suryender.16890@lpu.co.in Asst. Prof. D: EEE - SEE- R & A Lovely Professional University
  • 2.
    Review General Purpose Problem Solving  Production System  State-Space Search  Control Strategies  Characteristics of Problems  Exhaustive Search Techniques (BFS,DFS,DFID,BS)  Analysis of Search Methods
  • 3.
    Travelling Salesman Problem(TSP)  Statement: To find the shortest route of visiting all the cities once and returning back to the starting point. Assume that there are n cities and distance between each pair of cities is given. Factorial n-1 paths for n cities
  • 4.
    Travelling Salesman Problem(TSP) Start Generating complete paths, keep track of the shortest path found so far Stop exploring any path as soon as its partial length becomes greater than the shortest path found so far
  • 5.
    TSP n=5 No of paths (n-1)! = 4! = 24 C1 7 11 12 15 (C2) 20 C3 10 13 C4 12 17 5 C5
  • 6.
    Using problem specificknowledge to aid searching • Without incorporating knowledge into searching, one can have no bias (i.e. a Search everywhere!! preference) on the search space. • Without a bias, one is forced to look everywhere to find the answer. Hence, the complexity of uninformed search is intractable.
  • 7.
    Using problem specificknowledge to aid searching • With knowledge, one can search the state space as if he was given “hints” when exploring a maze. – Heuristic information in search = Hints • Leads to dramatic speed up in efficiency. A B C D E F G H I J Search only in this subtree!! K L M N O
  • 8.
    More formally, whyheuristic functions work? • In any search problem where there are at most b choices at each node and a depth of d at the goal node, a naive search algorithm would have to, in the worst case, search around O(bd) nodes before finding a solution (Exponential Time Complexity). • Heuristics improve the efficiency of search algorithms by reducing the effective branching factor from b to (ideally) a low constant b* such that – 1 =< b* << b
  • 9.
    Heuristic Search Techniques General Purpose Heuristics: – Are useful in various problem domains  Special purpose Heuristics: – Are domain specific
  • 10.
    Heuristic Search Techniques General-purpose heuristics.  Best-first search.  Branch and bound search (uniform cost search).  A* algorithm  Hill climbing.  Beam search.
  • 11.
    General Purpose Heuristics 1.For combinatorial is nearest neighbor algorithms that work by selecting the locally superior alternative 2. Mathematical analysis is not possible to perform  It is fun to see a program do something intelligent than to prove it  AI Problem domain are complex, not possible to produce analytical proof that will work  not possible to make statistical analysis of program behavior
  • 12.
    Heuristic Functions • Aheuristic function is a function f(n) that gives an estimation on the “cost” of getting from node n to the goal state – so that the node with the least cost among all possible choices can be selected for expansion first. • Three approaches to defining f: – f measures the value of the current state (its “goodness”) – f measures the estimated cost of getting to the goal from the current state: • f(n) = h(n) where h(n) = an estimate of the cost to get from n to a goal – f measures the estimated cost of getting to the goal state from the current state and the cost of the existing path to it. Often, in this case, we decompose f: • f(n) = g(n) + h(n) where g(n) = the cost to get to n (from initial state)
  • 13.
    Approach 1: fMeasures the Value of the Current State • Usually the case when solving optimization problems – Finding a state such that the value of the metric f is optimized • Often, in these cases, f could be a weighted sum of a set of component values: – N-Queens • Example: the number of queens under attack …
  • 14.
    Approach 2: fMeasures the Cost to the Goal A state X would be better than a state Y if the estimated cost of getting from X to the goal is lower than that of Y – because X would be closer to the goal than Y • 8–Puzzle h1: The number of misplaced tiles (squares with number). h2: The sum of the distances of the tiles from their goal positions.
  • 15.
    Approach 3: fmeasures the total cost of the solution path (Admissible Heuristic Functions) • A heuristic function f(n) = g(n) + h(n) is admissible if h(n) never overestimates the cost to reach the goal. – Admissible heuristics are “optimistic”: “the cost is not that much …” • However, g(n) is the exact cost to reach node n from the initial state. • Therefore, f(n) never over-estimate the true cost to reach the goal state through node n. • Theorem: A search is optimal if h(n) is admissible. – I.e. The search using h(n) returns an optimal solution. • Given h2(n) > h1(n) for all n, it’s always more efficient to use h2(n). – h2 is more realistic than h1 (more informed), though both are optimistic.
  • 16.
    Traditional informed search strategies • Greedy Best first search – “Always chooses the successor node with the best f value” where f(n) = h(n) – We choose the one that is nearest to the final state among all possible choices • A* search – Best first search using an “admissible” heuristic function f that takes into account the current cost g – Always returns the optimal solution path
  • 17.
    Informed Search Strategies Best First Search
  • 18.
    Informed Search Strategies Greedy Search eval-fn: f(n) = h(n)
  • 19.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 20.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 21.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 22.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 23.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 24.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 25.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 26.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 27.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 28.
    Greedy Search Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic
  • 29.
    Greedy Search: TreeSearch Start A
  • 30.
    Greedy Search: TreeSearch Start A 75 118 [329] 140 [374] B C [253] E
  • 31.
    Greedy Search: TreeSearch Start A 75 118 [329] 140 [374] B C [253] E 80 99 [193] [178] G F [366] A
  • 32.
    Greedy Search: TreeSearch Start A 75 118 [329] 140 [374] B C [253] E 80 99 [193] [178] G F [366] A 211 [253] I [0] E Goal
  • 33.
    Greedy Search: TreeSearch Start A 75 118 [329] 140 [374] B C [253] E 80 99 [193] [178] G F [366] A 211 [253] I [0] E Goal Path cost(A-E-F-I) = 253 + 178 + 0 = 431 dist(A-E-F-I) = 140 + 99 + 211 = 450
  • 34.
    Greedy Search: Optimal? Start State Heuristic: h(n) A 75 118 A 366 140 B B 374 C 111 C 329 E D 80 99 D 244 E 253 G F F 178 97 G 193 H 211 H 98 101 I 0 I Goal f(n) = h (n) = straight-line distance heuristic dist(A-E-G-H-I) =140+80+97+101=418
  • 35.
Greedy Search: Complete? Same graph and straight-line-distance heuristic as before (figure), except that the heuristic value of C is changed to h(C) = 250.
  • 36.
Greedy Search: Tree Search trace with h(C) = 250 (figure: expansion steps)
• Expand A: successors B [374], C [250], E [253]; choose C.
• Expand C: successor D [244]; choose D.
• Expand D: successor C [250]; choose C again, and so on.
• The search oscillates between C and D forever: an infinite branch, so greedy search is incomplete without checking for repeated states.
  • 42.
Greedy Search: Time and Space Complexity
• Greedy search is not optimal.
• Greedy search is incomplete without systematic checking of repeated states.
• In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m the maximum path length.
  • 43.
Informed Search Strategies: Branch and Bound Search / Uniform Cost Search, eval-fn: f(n) = g(n)
  • 44.
Uniform Cost Search (UCS) example (figure: a search tree whose nodes are annotated with their path cost [x] = g(n))
• The root's two children are reached with step costs 5 and 2, giving g = [5] and [2].
• Expanding [2] (step costs 1 and 7) yields [3] and [9]; expanding [3] (step costs 4 and 5) yields [7] and [8]; expanding [5] (step costs 1 and 4) yields [6] and [9].
• UCS always dequeues the cheapest node, so the expansion order is [2], then [3], then [5], then [6].
• The goal state marked in the figure is reached with path cost g(n) = 6.
  • 51.
Uniform Cost Search (UCS)
 When all step costs are equal, breadth-first search already finds the optimal solution.
 For an arbitrary step-cost function, uniform cost search expands the node n with the lowest path cost.
 UCS takes into account the total cost so far: g(n).
 UCS is guided by path cost rather than depth; nodes are ordered according to their path cost.
  • 52.
Uniform Cost Search (UCS)
 Main idea: expand the cheapest node, where the cost is the path cost g(n).
 Implementation: enqueue nodes in order of increasing g(n). QUEUING-FN: insert in order of increasing path cost, so that we always dequeue the cheapest node.
 Complete? Yes, provided step costs are positive.
 Optimal? Yes, if path cost is a non-decreasing function of depth (no negative step costs).
 Time Complexity: O(b^(1+⌊C*/ε⌋)) in general, where C* is the optimal solution cost and ε the minimum step cost; roughly O(b^d) for uniform step costs.
 Space Complexity: the same, since every node on the fringe is kept in the queue.
  • 53.
Branch and Bound Search (Uniform Cost Search)
• A cost function g(X) gives the cumulative cost of the path from the start node to the current node X.
• At each iteration the least-cost path obtained so far is extended one level further, until we reach the goal state.
• There can be many incomplete (partial) paths at any time, since only the cheapest one is extended.
• Each extension can create several new incomplete paths.
  • 54.
Algorithm: Branch and Bound
Input: START and GOAL states
Local variables: OPEN, CLOSED, NODE, SUCCs, FOUND
Output: Yes or No
Method:
• Initially store the root node with g(root) = 0 in the OPEN list; set CLOSED = ∅ and FOUND = false;
• While (OPEN ≠ ∅ and FOUND = false) do
{
• Remove the top (lowest-cost) element from OPEN and call it NODE;
• If NODE = GOAL node then FOUND = true
• else
{
• Put NODE in the CLOSED list;
• Find the SUCCs of NODE, if any, compute their g values and store them in the OPEN list;
• Sort all the nodes in the OPEN list based on their cost-function values;
}
} /* end of while */
• If FOUND = true then return Yes else return No
• Stop
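A compact Python sketch of the branch-and-bound / uniform-cost idea above: OPEN is a priority queue ordered by g, the cheapest partial path is extended at each iteration, and expanded nodes go to CLOSED. The function and variable names are illustrative; the commented example reuses the GRAPH dictionary from the greedy-search sketch above.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Branch and bound / UCS: always extend the cheapest partial path found so far."""
    open_list = [(0, start, [start])]            # OPEN, ordered by path cost g
    closed = set()                               # CLOSED: nodes already expanded
    while open_list:
        g, node, path = heapq.heappop(open_list) # cheapest partial path
        if node == goal:
            return path, g                       # first goal dequeued is optimal
        if node in closed:
            continue
        closed.add(node)
        for succ, step_cost in graph[node].items():
            if succ not in closed:
                heapq.heappush(open_list, (g + step_cost, succ, path + [succ]))
    return None, float('inf')

# Using the route graph from the greedy-search sketch:
# uniform_cost_search(GRAPH, 'A', 'I') -> (['A', 'E', 'G', 'H', 'I'], 418)
```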
  • 55.
    Informed Search Strategies A* Search eval-fn: f(n)=g(n)+h(n)
  • 56.
A* (A Star)
• Greedy search minimizes a heuristic h(n), an estimated cost from node n to the goal state. Although greedy search can considerably cut the search time (it is efficient), it is neither optimal nor complete.
• Uniform cost search minimizes the cost g(n) from the initial state to n. UCS is optimal and complete but not efficient.
• New strategy: combine greedy search and UCS to get an efficient algorithm that is complete and optimal.
  • 57.
    A* (A Star) •A* uses a heuristic function which combines g(n) and h(n): f(n) = g(n) + h(n) • g(n) is the exact cost to reach node n from the initial state. Cost so far up to node n. • h(n) is an estimation of the remaining cost to reach the goal.
  • 58.
A* (A Star): (figure) the cost of the path already travelled from the start to node n is g(n); h(n) estimates the remaining cost from n to the goal; the node is evaluated with f(n) = g(n) + h(n).
  • 59.
A* Search example (figure): the same route-finding graph and straight-line-distance heuristic h(n) as in the greedy example, now searched with f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state.
  • 60.
A* Search: Tree Search trace (figure: expansion steps)
• Expand A: B [f = 75 + 374 = 449], C [118 + 329 = 447], E [140 + 253 = 393]; choose E.
• Expand E: G [220 + 193 = 413], F [239 + 178 = 417]; choose G.
• Expand G: H [317 + 98 = 415]; choose H.
• Expand H: I [418 + 0 = 418]; expanding F next gives a second route to I with f = 450, so the node I with f = 418 is dequeued first.
• A* returns the optimal path A–E–G–H–I with cost 418.
  • 68.
A* with h() not admissible: h() overestimates the cost to reach the goal state.
  • 69.
A* Search: h not admissible! The same graph and heuristic as before (figure), except that h(H) is raised to 138, overestimating the true remaining cost from H to the goal I (which is 101). f(n) = g(n) + h(n); g(n) is the exact cost to reach node n from the initial state.
  • 70.
A* Search: Tree Search trace with the overestimating heuristic (figure: expansion steps)
• Expand A: B [449], C [447], E [393]; choose E.
• Expand E: G [413], F [417]; choose G.
• Expand G: H [f = 317 + 138 = 455].
• The cheapest fringe node is now F [417]; expanding it gives the goal I with f = 450. Expanding C [447] gives D [473].
• The goal I with f = 450 is dequeued before the path through H (f = 455) is ever completed, so A* returns A–E–F–I with cost 450 instead of the optimal 418: A* is not optimal!
  • 79.
A* Search: Analysis
• A* is complete unless there are infinitely many nodes with f < f(G).
• A* is optimal if the heuristic h is admissible.
• Time complexity depends on the quality of the heuristic but is still exponential in the worst case.
• Space complexity: A* keeps all generated nodes in memory, giving worst-case O(b^d) space; an iterative deepening version (IDA*) is possible.
  • 80.
A* Algorithm
1. The search queue Q is initially empty.
2. Place the start state s in Q with f value h(s).
3. If Q is empty, return failure.
4. Take the node n from Q with the lowest f value. (Keep Q sorted by f values and pick the first element.)
5. If n is a goal node, stop and return the solution.
6. Generate the successors of node n.
7. For each successor n’ of n do:
a) Compute f(n’) = g(n) + cost(n, n’) + h(n’).
b) If n’ is new (never generated before), add n’ to Q.
c) If n’ is already in Q with a higher f value, replace that entry with the current f(n’) and re-insert n’ in sorted order in Q.
End for
8. Go back to step 3.
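A minimal A* sketch in Python following the steps above. It takes the graph and heuristic as parameters; the commented call reuses the GRAPH and H tables transcribed in the greedy-search sketch, and the priority queue plays the role of the sorted queue Q.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node with the lowest f(n) = g(n) + h(n)."""
    open_q = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g found so far per node
    while open_q:
        f, g, node, path = heapq.heappop(open_q)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                             # stale entry: a cheaper path was found
        for succ, step in graph[node].items():
            g2 = g + step
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2                # step 7c: keep only the better f value
                heapq.heappush(open_q, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# a_star(GRAPH, H, 'A', 'I') -> (['A', 'E', 'G', 'H', 'I'], 418), matching the trace above
```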
  • 81.
    Informed Search Strategies Iterative Deepening A*
  • 82.
Iterative Deepening A* (IDA*)
• Use f(N) = g(N) + h(N) with an admissible and consistent h.
• Each iteration is a depth-first search with a cutoff on the f value of expanded nodes.
  • 83.
Consistent Heuristic
• An admissible heuristic h is consistent (or satisfies the monotone restriction) if, for every node N and every successor N’ of N: h(N) ≤ c(N, N’) + h(N’) (a triangle inequality, illustrated in the figure).
• A consistent heuristic is admissible.
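A one-line consequence of consistency worth noting (this derivation is standard but not on the slide): along any path the f values never decrease, which is what lets A* and IDA* expand nodes in non-decreasing order of f.

```latex
f(N') = g(N') + h(N') = g(N) + c(N,N') + h(N')
      \;\ge\; g(N) + h(N) = f(N),
\qquad \text{since } h(N) \le c(N,N') + h(N').
```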
  • 84.
IDA* Algorithm
• In the first iteration, we determine an “f-cost limit” (cut-off value) f(n0) = g(n0) + h(n0) = h(n0), where n0 is the start node.
• We expand nodes using the depth-first algorithm and backtrack whenever f(n) for an expanded node n exceeds the cut-off value.
• If this search does not succeed, determine the lowest f value among the nodes that were visited but not expanded.
• Use this f value as the new cut-off limit and do another depth-first search.
• Repeat this procedure until a goal node is found.
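A small IDA* sketch in Python following the procedure above: each iteration is a depth-first search that backtracks when f exceeds the current cut-off, and the next cut-off is the smallest f value that exceeded it. The names are illustrative; the commented call reuses the GRAPH and H tables from the earlier sketches.

```python
def ida_star(graph, h, start, goal):
    """Iterative deepening A*: depth-first search with an increasing f-cost cut-off."""
    def dfs(node, g, path, limit):
        f = g + h[node]
        if f > limit:
            return f, None                      # exceeded cut-off: report f for next limit
        if node == goal:
            return f, path
        next_limit = float('inf')
        for succ, step in graph[node].items():
            if succ not in path:                # avoid cycles on the current path
                t, found = dfs(succ, g + step, path + [succ], limit)
                if found is not None:
                    return t, found
                next_limit = min(next_limit, t)
        return next_limit, None

    limit = h[start]                            # first cut-off: f(n0) = h(n0)
    while True:
        limit, solution = dfs(start, 0, [start], limit)
        if solution is not None:
            return solution
        if limit == float('inf'):
            return None                         # no solution exists

# ida_star(GRAPH, H, 'A', 'I') -> ['A', 'E', 'G', 'H', 'I']
```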
  • 85.
Hill Climbing
Input: START and GOAL states
Local variables: OPEN, CLOSED, NODE, SUCCs, FOUND
Output: Yes or No
Method:
• Initially store the root node in the OPEN list (maintained as a stack); FOUND = false;
• While (OPEN ≠ ∅ and FOUND = false) do
{
• Remove the top element from OPEN and call it NODE;
• If NODE = GOAL node then FOUND = true
• else
{
• Find the SUCCs of NODE, if any;
• Sort the SUCCs by their estimated cost from NODE to the goal state and add them to the front of the OPEN list;
}
} /* end of while */
• If FOUND = true then return Yes else return No
• Stop
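A simple steepest-ascent hill-climbing sketch in Python in the spirit of the algorithm above: from the current state, move to the best-valued successor and stop when no successor improves on the current state (exactly where local maxima, plateaus, and ridges cause trouble, as the following slides discuss). The neighbours and value functions are placeholders to be supplied for a concrete problem.

```python
def hill_climbing(start, neighbours, value):
    """Greedy local search: repeatedly move to the best neighbour.

    neighbours(state) -> iterable of successor states
    value(state)      -> score to maximise (e.g. the negated heuristic cost)
    """
    current = start
    while True:
        succs = list(neighbours(current))
        if not succs:
            return current                      # dead end
        best = max(succs, key=value)
        if value(best) <= value(current):
            return current                      # local maximum or plateau reached
        current = best

# Toy usage: climb towards x = 7 on the integers, value(x) = -(x - 7)^2
print(hill_climbing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2))   # -> 7
```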
  • 86.
Hill Climbing: Disadvantages
• It may fail to find a solution: the algorithm may terminate not by finding a goal state but by reaching a state from which no better state can be generated.
• This happens if the program has reached a local maximum, a plateau, or a ridge.
  • 87.
  • 88.
Hill Climbing: Disadvantages – Local maximum: a state that is better than all of its neighbours, but not better than some other states farther away.
  • 89.
Hill Climbing: Disadvantages – Plateau: a flat area of the search space in which all neighbouring states have the same value, requiring a random walk.
  • 90.
Hill Climbing: Disadvantages – Ridge: a special kind of local maximum. The orientation of the high region, relative to the set of available moves, makes it impossible to climb up in a single move, although many moves executed serially may increase the height.
  • 91.
Hill Climbing: Disadvantages – Ways Out
• Backtrack to some earlier node and try going in a different direction (a good way of dealing with local maxima).
• Make a big jump to try to get into a new section of the space (a good way of dealing with plateaus).
• Move in several directions at once (a good strategy for dealing with ridges).
  • 92.
Hill Climbing: Disadvantages
• Hill climbing is a local method: it decides what to do next by looking only at the “immediate” consequences of its choices rather than by exhaustively exploring all the consequences.
• Global information might be encoded in heuristic functions.
  • 93.
Beam Search
Input: START and GOAL states
Local variables: OPEN, NODE, SUCCs, W_OPEN, FOUND
Output: Yes or No
Method:
• NODE = root node; FOUND = false;
• If NODE = GOAL node then FOUND = true, else find the SUCCs of NODE, if any, with their estimated costs and store them in the OPEN list;
• While (FOUND = false and it is possible to proceed further) do
{
• Sort the OPEN list;
• Select the top W elements from the OPEN list, put them in the W_OPEN list and empty the OPEN list;
• For each NODE in the W_OPEN list
{
• If NODE = GOAL state then FOUND = true, else find the SUCCs of NODE, if any, with their estimated costs and store them in the OPEN list;
}
} /* end of while */
• If FOUND = true then return Yes else return No
• Stop
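A beam-search sketch in Python mirroring the algorithm above: at every level only the W best nodes (by estimated cost) are kept as the beam (W_OPEN) and expanded; everything else is discarded. The beam width W is a parameter, and the commented call reuses the GRAPH and H tables from the earlier sketches.

```python
def beam_search(graph, h, start, goal, W=2):
    """Keep only the W best frontier nodes (by heuristic h) at each level."""
    level = [(h[start], [start])]                 # current beam: (h value, path)
    while level:
        # Check the current beam for the goal before expanding further.
        for _, path in level:
            if path[-1] == goal:
                return path
        # Expand every node in the beam and collect all successors (OPEN).
        open_list = []
        for _, path in level:
            for succ, _cost in graph[path[-1]].items():
                if succ not in path:              # avoid trivial cycles
                    open_list.append((h[succ], path + [succ]))
        # Keep only the W best successors for the next level (W_OPEN).
        level = sorted(open_list, key=lambda item: item[0])[:W]
    return None                                   # beam emptied: unable to proceed further

# beam_search(GRAPH, H, 'A', 'I', W=2) -> ['A', 'E', 'F', 'I']
```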
  • 94.
Beam search continues until the goal state is found or the search is not able to proceed further.
  • 95.
Constraint Satisfaction Problems (CSPs)
• Standard search problem:
– the state is a “black box”: any data structure that supports a successor function, a heuristic function, and a goal test
• CSP:
– the state is defined by variables Xi with values from domains Di
– the goal test is a set of constraints specifying allowable combinations of values for subsets of variables
  • 96.
Constraint Satisfaction
• Constraint satisfaction problems in AI have the goal of discovering some problem state that satisfies a given set of constraints.
• Design tasks can be viewed as constraint satisfaction problems in which a design must be created within fixed limits on time, cost, and materials.
  • 97.
Constraint Satisfaction
• Constraint satisfaction is a search procedure that operates in a space of constraint sets. The initial state contains the constraints that are originally given in the problem description. A goal state is any state that has been constrained “enough”, where “enough” must be defined for each problem. For example, in cryptarithmetic, enough means that each letter has been assigned a unique numeric value.
• Constraint satisfaction is a two-step process:
– First, constraints are discovered and propagated as far as possible throughout the system.
– Then, if there is still not a solution, search begins: a guess about something is made and added as a new constraint.
  • 98.
Constraint Satisfaction: Example – Cryptarithmetic Problem
   SEND
 + MORE
 -------
  MONEY
Initial state (constraints):
• No two letters have the same value.
• The sums of the digits must be as shown in the problem.
Goal state:
• All letters have been assigned a digit in such a way that all the initial constraints are satisfied.
  • 99.
Cryptarithmetic Problem: Constraint Satisfaction
• The solution process proceeds in cycles. At each cycle, two significant things are done:
1. Constraints are propagated by using rules that correspond to the properties of arithmetic.
2. A value is guessed for some letter whose value is not yet determined.
A few heuristics can help to select the best guess to try first:
• If one letter has only two possible values and another has six possible values, there is a better chance of guessing right on the first than on the second.
• Another useful heuristic: if a letter participates in many constraints, it is a good idea to prefer it to a letter that participates in only a few.
  • 100.
Solving a Cryptarithmetic Problem (figure: constraint-propagation tree for SEND + MORE = MONEY)
• Initial propagation: M = 1; S = 8 or 9; O = 0 or 1 ⇒ O = 0; N = E or E + 1 ⇒ N = E + 1; C2 = 1; N + R > 8; E ≠ 9.
• Guess E = 2, which propagates to N = 3, R = 8 or 9, and either 2 + D = Y or 2 + D = 10 + Y.
• Branch C1 = 0: 2 + D = Y; N + R = 10 + E forces R = 9 and hence S = 8.
• Branch C1 = 1: 2 + D = 10 + Y, i.e. D = 8 + Y, so D = 8 or 9.
• The leaves D = 8 (Y = 0) and D = 9 (Y = 1) both conflict with values already assigned, so the search must backtrack and try another guess for E.
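For comparison with the hand propagation above, here is a brute-force Python sketch that simply searches digit assignments for SEND + MORE = MONEY: the pure "guess and test" baseline that constraint propagation is meant to shortcut.

```python
from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'                       # the 8 distinct letters in the puzzle
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:         # no leading zeros
            continue
        send  = 1000*a['S'] + 100*a['E'] + 10*a['N'] + a['D']
        more  = 1000*a['M'] + 100*a['O'] + 10*a['R'] + a['E']
        money = 10000*a['M'] + 1000*a['O'] + 100*a['N'] + 10*a['E'] + a['Y']
        if send + more == money:
            return a

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}: 9567 + 1085 = 10652
```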
  • 101.
Constraint Satisfaction: Two-step process
1. Constraints are discovered and propagated as far as possible.
2. If there is still not a solution, then search begins, adding new constraints.
  • 102.
Constraint Satisfaction: Two kinds of rules
1. Rules that define valid constraint propagation.
2. Rules that suggest guesses when necessary.
  • 103.
When to Use Search Techniques
• The search space is small, and
– there are no other available techniques, or
– it is not worth the effort to develop a more efficient technique
• The search space is large, and
– there are no other available techniques, and
– there exist “good” heuristics
  • 104.
Conclusions
• Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These techniques are called informed (heuristic) search strategies.
• Even though heuristics improve the performance of informed search algorithms, such algorithms remain time-consuming, especially for large problem instances.