Hill climbing

Hill climbing is a variant of generate-and-test in which feedback from the procedure is used to help the generator decide which direction to move in the search space. In a pure generate-and-test procedure, the test function responds with only a yes or no. But if the test function is augmented with a heuristic function that provides an estimate of how close a given state is to a goal state, the generate procedure can exploit it. This is particularly nice because often the computation of a heuristic function can be done at almost no cost at the same time that the test for a solution is being performed. Hill climbing is often used when a good heuristic function is available for evaluating states but when no other useful knowledge is available.

FOR EXAMPLE:
Suppose you are in an unfamiliar city without a map and you want to get downtown. You simply aim for the tall buildings. The heuristic function is just the distance between the current location and the location of the tall buildings, and the desirable states are those in which this distance is minimized. Getting downtown is an example of a problem that has a well-defined goal state, and for such problems hill climbing can terminate whenever a goal state is reached.
For pure maximization (or minimization) problems, such as the traveling salesman problem, there is no a priori goal state. For problems of this sort, it makes sense to terminate hill climbing when there is no reasonable alternative state to move to.

SIMPLE HILL CLIMBING:
The simplest way to implement hill climbing is the following algorithm.

ALGORITHM: SIMPLE HILL CLIMBING
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the current state:
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If it is a goal state, then return it and quit.
ii. If it is not a goal state but it is better than the current state, then make it the current state.
iii. If it is not better than the current state, then continue in the loop.
The key difference between this algorithm and the one we gave for generate-and-test is the use of an evaluation function as a way to inject task-specific knowledge into the control process.
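As a rough illustration, the procedure above can be sketched in Python. The helper names here are assumptions: successors(state) is taken to yield the states produced by each operator that applies to state, evaluate(state) to return the heuristic value (higher is better), and is_goal(state) to test for a solution.

    def simple_hill_climbing(initial, successors, evaluate, is_goal):
        # Step 1: evaluate the initial state.
        current = initial
        if is_goal(current):
            return current
        # Step 2: loop until a solution is found or no operator helps.
        while True:
            moved = False
            for new in successors(current):        # (a) apply an untried operator
                if is_goal(new):                   # (b-i) goal state: return it
                    return new
                if evaluate(new) > evaluate(current):
                    current = new                  # (b-ii) better: make it current
                    moved = True
                    break                          # take the FIRST better state
                # (b-iii) not better: try the next operator
            if not moved:
                return current                     # stuck: no move improves matters

Note that the loop moves to the first improving state it finds, which is exactly what distinguishes simple hill climbing from the steepest-ascent variant described below.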

AN EXAMPLE OF HOW HILL CLIMBING WORKS:

Consider the puzzle of the four colored blocks. To solve the problem, we first need to define a heuristic function that describes how close a particular configuration is to being a solution. One such function is simply the sum of the number of different colors on each of the four sides. A solution to the puzzle will have a value of 16. Next we need to define a set of rules that describe ways of transforming one configuration into another. Actually, one rule will suffice. It says simply: pick a block and rotate it 90 degrees in any direction. Having provided these definitions, the next step is to generate a starting configuration. This can either be done at random or with the aid of the heuristic function. Now hill climbing can begin. We generate a new state by selecting a block and rotating it. If the resulting state is better, then we keep it. If not, we return to the previous state and try a different perturbation.
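As a minimal sketch of these definitions in Python, assuming each block is represented as a mapping from six face names to colors and a state is a list of four such blocks (this representation is our assumption, not part of the puzzle statement):

    def heuristic(state):
        # Sum, over the four visible sides of the row of blocks,
        # the number of distinct colors showing on that side.
        total = 0
        for side in ('front', 'back', 'top', 'bottom'):
            total += len({block[side] for block in state})
        return total  # a solution scores 4 colors x 4 sides = 16

    def rotate_forward(block):
        # The single transformation rule: rotate one block 90 degrees
        # (here about its left-right axis; the other axes are analogous).
        return {'front': block['top'], 'bottom': block['front'],
                'back': block['bottom'], 'top': block['back'],
                'left': block['left'], 'right': block['right']}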

STEEPEST-ASCENT HILL CLIMBING:

 A useful variation on simple hill climbing considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing or gradient search. Notice that this contrasts with the basic method in which the first state that is better than the current state is selected.

ALGORITHM: STEEPEST-ASCENT HILL CLIMBING

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
b) For each operator that applies to the current state, do:
i. Apply the operator and generate a new state.
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
c) If SUCC is better than the current state, then set the current state to SUCC.
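In the same illustrative Python style as before (same assumed helper names), steepest-ascent hill climbing examines every successor before committing to a move:

    def steepest_ascent(initial, successors, evaluate, is_goal):
        # Step 1: evaluate the initial state.
        current = initial
        if is_goal(current):
            return current
        while True:
            # (a) SUCC starts out worse than any possible successor.
            succ, succ_value = None, float('-inf')
            for new in successors(current):        # (b) try every operator
                if is_goal(new):
                    return new
                value = evaluate(new)
                if value > succ_value:             # keep the best successor so far
                    succ, succ_value = new, value
            # (c) move only if the best successor beats the current state.
            if succ is None or succ_value <= evaluate(current):
                return current                     # a complete pass changed nothing
            current = succ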
To apply steepest-ascent hill climbing to the colored blocks, we must consider all perturbations of the initial state and choose the best. For this problem, this is difficult since there are so many possible moves. There is a trade-off between the time required to select a move (usually longer for steepest-ascent hill climbing) and the number of moves required to get to a solution (usually greater for basic hill climbing) that must be considered when deciding which method will work better for a particular problem.
Both basic and steepest-ascent hill climbing may fail to find a solution. Either algorithm may terminate not by finding a goal state but by getting to a state from which no better states can be generated. This will happen if the program has reached a local maximum, a plateau, or a ridge.

LOCAL MAXIMUM:

A local maximum is a state that is better than all its neighbors but is not better than some other states farther away. At a local maximum, all moves appear to make things worse. Local maxima are particularly frustrating because they often occur almost within sight of a solution. In this case, they are sometimes called foothills.

PLATEAU:

A plateau is a flat area of the search space in which a whole set of neighboring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.

RIDGE:

A ridge is a special kind of local maximum. It is an area of the search space that is higher than surrounding areas and that itself has a slope. But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse a ridge by single moves.
There are some ways of dealing with these problems, although these methods are by no means guaranteed:
· Backtrack to some earlier node and try going in a different direction. This is a good way of dealing with local maxima.
· Make a big jump in some direction to try to get to a new section of the search space. This is a good way of dealing with plateaus.
· Apply two or more rules before doing the test. This corresponds to moving in several directions at once and is a particularly good strategy for dealing with ridges (see the sketch below).
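To make the ridge case concrete, here is a small invented example in Python (the landscape and the moves are ours, not from the text): the high region runs along the diagonal x == y, but the available operators change only one coordinate at a time.

    def f(x, y):
        # An invented landscape with a ridge along the line x == y.
        return x + y - 4 * abs(x - y)

    # From (0, 0), every single-coordinate move looks worse...
    single_moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    print([f(dx, dy) for dx, dy in single_moves])   # [-3, -5, -3, -5], all below f(0, 0) = 0

    # ...but applying two operators before testing climbs the ridge.
    print(f(1, 1))   # 2, better than f(0, 0) = 0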
Consider the blocks world problem. Assume the same operators (i.e., pick up one block and put it on the table; pick up one block and put it on another one) and suppose we use the following heuristic function:

Local: Add one point for every block that is resting on the thing it is supposed to be resting on. Subtract one point for every block that is sitting on the wrong thing.
Using this function, the goal state has a score of 8. The initial state has a score of 4 (since it gets one point added for blocks C, D, E, F, G, and H and one point subtracted for blocks A and B). There is only one move from the initial state, namely to move block A to the table. That produces a state with a score of 6. The hill-climbing procedure will accept that move. From the new state, there are three possible moves, leading to three new states. These states have scores of (a) 4, (b) 4, and (c) 4. Hill climbing will halt because all these states have lower scores than the current state. The process has reached a local maximum that is not the global maximum.
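The arithmetic above can be checked with a short Python sketch. The figure giving the initial and goal configurations is not reproduced here, so the configurations below are reconstructed from the scores quoted (A belongs on the table, B on A, and so on up to H) and should be read as an assumption:

    # A state maps each block to what it rests on ('table' or another block).
    GOAL = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C',
            'E': 'D', 'F': 'E', 'G': 'F', 'H': 'G'}

    def local_score(state):
        # +1 for each block resting on what it should; -1 otherwise.
        return sum(1 if state[b] == GOAL[b] else -1 for b in state)

    initial = dict(GOAL, A='H', B='table')        # A and B are misplaced
    print(local_score(GOAL))                      # 8, the goal score
    print(local_score(initial))                   # 4
    print(local_score(dict(initial, A='table')))  # 6, after the only legal move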


                     

SIMULATED ANNEALING:

Simulated annealing is a variation of hill climbing in which, at the beginning of the process, some downhill moves may be made. The idea is to do enough exploration of the whole space early on so that the final solution is relatively insensitive to the starting state. This should lower the chances of getting caught at a local maximum, a plateau, or a ridge.
In order to be compatible with standard usage in discussions of simulated annealing, we make two notational changes for the duration of this section: we use the term objective function in place of the term heuristic function, and we attempt to minimize rather than maximize its value. Thus we actually describe a process of valley descending rather than hill climbing.

SIMULATED ANNEALING DIFFERS FROM THE SIMPLE HILL-CLIMBING PROCEDURE IN THREE WAYS:

·         The annealing schedule must be maintained.
·         Moves to worse states may be accepted.
·         It is a good idea to maintain, in addition to the current state, the best state found so far. Then, if the final state is worse than that earlier state, the earlier state is still available.

ALGORITHM: SIMULATED ANNEALING

1.       Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2.       Initialize BEST-SO-FAR to the current state.
3.       Initialize T according to the annealing schedule.
4.       Loop until a solution is found or until there are no new operators left to be applied in the current state.
a)      Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state. Compute
∆E = (value of current state) - (value of new state)
· If the new state is a goal state, then return it and quit.
· If it is not a goal state but is better than the current state, then make it the current state. Also set BEST-SO-FAR to this new state if it is better than BEST-SO-FAR.
· If it is not better than the current state, then make it the current state with probability p' = e^(∆E/T), where T is the current temperature given by the annealing schedule (since ∆E is negative for a worse state, p' < 1). This step is usually implemented by invoking a random number generator to produce a number in the range [0, 1]. If that number is less than p', the move is accepted. Otherwise, do nothing.
c)       Revise T as necessary according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.

To implement this revised algorithm, it is necessary to select an annealing schedule, which has three components. The first is the initial value to be used for the temperature. The second is the criterion that will be used to decide when the temperature should be reduced. The third is the amount by which the temperature will be reduced each time it is changed. There may also be a fourth component of the schedule, namely, when to quit. Simulated annealing is often used to solve problems in which the number of moves from a given state is very large.
For such problems, it may not make sense to try all possible moves. Instead, it may be useful to exploit some criterion involving the number of moves that have been tried since an improvement was found.
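Putting the pieces together, here is a minimal Python sketch of the whole procedure, minimizing value(state) as in the discussion above. The geometric annealing schedule (initial temperature t0, decay factor alpha applied after every move, and cutoff temperature t_min serving as the "when to quit" component) is purely illustrative, as are the helper names; successors(state) is assumed to return a non-empty list of candidate next states.

    import math
    import random

    def simulated_annealing(initial, successors, value, is_goal,
                            t0=10.0, alpha=0.95, t_min=0.01):
        current = best_so_far = initial               # steps 1-2
        t = t0                                        # step 3: initialize T
        while t > t_min:                              # the schedule decides when to quit
            if is_goal(current):
                return current
            new = random.choice(successors(current))  # step 4a: generate a new state
            delta_e = value(current) - value(new)     # step 4b
            if delta_e > 0:                           # new state is better (lower value)
                current = new
                if value(current) < value(best_so_far):
                    best_so_far = current
            elif random.random() < math.exp(delta_e / t):
                current = new                         # worse move accepted with p' = e^(dE/T)
            t *= alpha                                # step 4c: revise T per the schedule
        return best_so_far                            # step 5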
                                
