Algorithms

Unit I

Breadth First Search

  1. Create a variable called NODE-LIST and set it to the initial state.
  2. Loop until the goal state is found or NODE-LIST is empty:
    a) If NODE-LIST is empty, quit. Otherwise remove the first element, say E, from NODE-LIST.
    b) For each way that each rule can match the state described in E, do:
       i)   Apply the rule to generate a new state.
       ii)  If the new state is the goal state, quit and return this state.
       iii) Otherwise, add the new state to the end of NODE-LIST.
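
A minimal Python sketch of the same procedure, for reference. It assumes the problem supplies a successors(state) function and a goal_test(state) predicate (both names are illustrative), and it adds a visited set, which the pseudocode leaves implicit, so that repeated states cannot loop forever.

    from collections import deque

    def breadth_first_search(initial_state, successors, goal_test):
        """Return a goal state reachable from initial_state, or None."""
        if goal_test(initial_state):                # trivial case, not spelled out above
            return initial_state
        node_list = deque([initial_state])          # step 1: NODE-LIST starts with the initial state
        visited = {initial_state}                   # assumes states are hashable
        while node_list:                            # step 2: loop until NODE-LIST is empty
            e = node_list.popleft()                 # 2a: remove the first element E
            for new_state in successors(e):         # 2b: every state some rule can generate from E
                if goal_test(new_state):            # 2b-ii: quit and return the goal state
                    return new_state
                if new_state not in visited:
                    visited.add(new_state)
                    node_list.append(new_state)     # 2b-iii: otherwise append to NODE-LIST
        return None                                 # NODE-LIST exhausted without reaching the goal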

Depth First Search

1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled:
   a) Generate a successor, say E, of the initial state. If there are no more successors, signal failure.
   b) Call Depth-First Search with E as the initial state.
   c) If success is returned, signal success. Otherwise continue in this loop.
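
A recursive Python sketch of the same procedure, using the same assumed successors/goal_test interface; the depth limit is an added safeguard against endless descent and is not part of the steps above.

    def depth_first_search(state, successors, goal_test, limit=50):
        """Return a goal state found by recursive depth-first search, or None."""
        if goal_test(state):                        # step 1: the current state is a goal state
            return state
        if limit == 0:                              # illustrative depth bound only
            return None
        for e in successors(state):                 # step 2a: generate a successor E
            result = depth_first_search(e, successors, goal_test, limit - 1)  # 2b: recurse on E
            if result is not None:                  # 2c: propagate success
                return result
        return None                                 # no successor led to a goal: signal failure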


Hill Climbing Algorithm (Informed Search)

function HILL-CLIMBING(problem) returns a solution state
         inputs: problem, a problem
         static: current, a node
                 next, a node
         current ← MAKE-NODE(INITIAL-STATE[problem])
         loop do
                 next ← a highest-valued successor of current
                 if VALUE[next] < VALUE[current] then return current
                 current ← next
         end
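
A Python rendering of the loop above, assuming problem-specific successors(state) and value(state) callables. It stops when no successor strictly improves on the current state; note that the pseudocode's strict < test would instead keep accepting equally valued (sideways) moves.

    def hill_climbing(initial_state, successors, value):
        """Move to the best-valued successor until no successor is an improvement."""
        current = initial_state
        while True:
            succs = list(successors(current))
            if not succs:                           # no moves at all: return the current state
                return current
            best = max(succs, key=value)            # a highest-valued successor of current
            if value(best) <= value(current):       # no strict improvement: local maximum reached
                return current
            current = best                          # otherwise climb and repeat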

Constraint Satisfaction Problem

Until a complete solution is found or until all paths have led to dead ends, do:

1. Select an unexpanded node of the search graph.

2. Apply the constraint inference rules to the selected node to generate all possible new constraints.

3. If the set of constraints contains a contradiction, then report that this path is a dead end.

4. If the set of constraints describes a complete solution then report success.

5. If neither a contradiction nor a complete solution has been found, then apply the rules to generate new partial solutions. Insert these partial solutions into the search graph.
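
The loop above interleaves constraint inference with search. The Python sketch below covers only the search side: plain backtracking that checks every constraint as each variable is assigned and treats a violation as a dead end (step 3); propagation of newly inferred constraints (step 2) is omitted for brevity. The inputs variables, domains and constraints are assumptions, with each constraint a predicate over a partial assignment.

    def backtracking_csp(variables, domains, constraints, assignment=None):
        """Return a complete assignment satisfying all constraints, or None."""
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):             # step 4: a complete solution is found
            return dict(assignment)
        var = next(v for v in variables if v not in assignment)  # step 1: pick an unexpanded node
        for val in domains[var]:                          # step 5: generate new partial solutions
            assignment[var] = val
            if all(c(assignment) for c in constraints):   # step 3: a contradiction means a dead end
                result = backtracking_csp(variables, domains, constraints, assignment)
                if result is not None:
                    return result
            del assignment[var]                           # undo this choice and try the next value
        return None                                       # every value led to a dead end

    # Example constraint for hypothetical map-colouring variables 'WA' and 'NT':
    # lambda a: 'WA' not in a or 'NT' not in a or a['WA'] != a['NT']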

Unit II

Min-Max (Minimax) Algorithm

function minimax(board, depth, isMaximizingPlayer):

    if current board state is a terminal state :
        return value of the board
    
    if isMaximizingPlayer :
        bestVal = -INFINITY
        for each move in board :
            make the move on board
            value = minimax(board, depth+1, false)
            undo the move
            bestVal = max( bestVal, value )
        return bestVal

    else :
        bestVal = +INFINITY
        for each move in board :
            make the move on board
            value = minimax(board, depth+1, true)
            undo the move
            bestVal = min( bestVal, value )
        return bestVal
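
A runnable Python version of the same recursion. The game interface (is_terminal, evaluate, children) is passed in as callables and is purely illustrative; depth counts down to a cutoff here rather than up as in the pseudocode.

    import math

    def minimax(state, depth, is_maximizing, is_terminal, evaluate, children):
        """Return the minimax value of state, searching at most `depth` plies ahead."""
        if is_terminal(state) or depth == 0:            # evaluate terminal or cutoff positions
            return evaluate(state)
        if is_maximizing:
            best_val = -math.inf
            for child in children(state):               # states reachable by one MAX move
                best_val = max(best_val, minimax(child, depth - 1, False,
                                                 is_terminal, evaluate, children))
            return best_val
        else:
            best_val = math.inf
            for child in children(state):               # states reachable by one MIN move
                best_val = min(best_val, minimax(child, depth - 1, True,
                                                 is_terminal, evaluate, children))
            return best_val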

Alpha Beta Pruning

    function Max-Value(state, game, α, β) returns the minimax value of state
      inputs:
        state, current state in the game
        game, game description
        α, the best score for MAX along the path to state
        β, the best score for MIN along the path to state
      if CUTOFF-TEST(state) then return EVAL(state)
      for each s in SUCCESSORS(state) do
        α ← MAX(α, MIN-VALUE(s, game, α, β))
        if α >= β then return β
      end
      return α


    function Min-Value(state, game, α, β) returns the minimax value of state
      if CUTOFF-TEST(state) then return EVAL(state)
      for each s in SUCCESSORS(state) do
        β ← MIN(β, MAX-VALUE(s, game, α, β))
        if β <= α then return α
      end
      return β
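
The same pair of mutually recursive functions in Python, with α and β passed explicitly. The callables cutoff_test, evaluate and successors mirror CUTOFF-TEST, EVAL and SUCCESSORS above and are assumed to be supplied by the game; a search would start from max_value(root, -math.inf, math.inf, ...).

    import math

    def max_value(state, alpha, beta, cutoff_test, evaluate, successors):
        """Minimax value of state for MAX, with alpha-beta pruning."""
        if cutoff_test(state):
            return evaluate(state)
        for s in successors(state):
            alpha = max(alpha, min_value(s, alpha, beta, cutoff_test, evaluate, successors))
            if alpha >= beta:               # MIN will never allow this branch: prune
                return beta
        return alpha

    def min_value(state, alpha, beta, cutoff_test, evaluate, successors):
        """Minimax value of state for MIN, with alpha-beta pruning."""
        if cutoff_test(state):
            return evaluate(state)
        for s in successors(state):
            beta = min(beta, max_value(s, alpha, beta, cutoff_test, evaluate, successors))
            if beta <= alpha:               # MAX already has a better option: prune
                return alpha
        return beta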
