- Goal formulation - a set of one or more (desirable) world states (e.g. checkmate in chess)
- Problem formulation - what actions and states to consider, given a goal and an initial state
- Search for a solution - given the problem, search for a solution: a sequence of actions that achieves the goal starting from the initial state
- Execution of the solution
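The four phases can be sketched on a toy problem (the number-line world and all names below are illustrative, not a standard API): goal and problem formulation fix the states, actions, and goal test; search produces an action sequence; execution replays it.

```python
from collections import deque

def bfs(initial, goal_test, successors):
    """Search phase: breadth-first search for an action sequence.
    successors(state) yields (action, next_state) pairs."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # no action sequence achieves the goal

# Goal + problem formulation: reach state 5 on a number line, starting
# at 0, with actions "inc" (+1) and "dec" (-1).
successors = lambda s: [("inc", s + 1), ("dec", s - 1)] if -10 <= s <= 10 else []
plan = bfs(0, lambda s: s == 5, successors)

# Execution phase: replay the found plan against the world.
state = 0
for action in plan:
    state = state + 1 if action == "inc" else state - 1
print(plan, state)  # five 'inc' actions, ending in state 5
```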
Goal Formulation
Specify the objectives to be achieved
goal - a set of desirable world states in which the objectives have been achieved
current / initial situation - starting point for the goal formulation
actions - cause transitions between world states
Problem Formulation
Which actions and states to consider
states - possible world states
accessibility - the agent can determine via its sensors in which state it is
consequences of actions - the agent knows the results of its actions
levels - problems and actions can be specified at various levels
constraints - conditions that influence the problem-solving process
performance - measures to be applied
costs - utilization of resources
Problem Types
Not all problems are created equal
single-state problem
multiple-state problem
contingency problem
exploration problem
Single-state problem
exact prediction is possible
state - known exactly after any sequence of actions
accessibility of the world - all essential information can be obtained through sensors
consequences of actions - known to the agent
goal - for each known initial state, there is a unique goal state that is guaranteed to be reachable via an action sequence
simplest case, but severely restricted
Example: Vacuum world
Limitations:
- can’t deal with incomplete accessibility
- incomplete knowledge about consequences, changes in the world
- indeterminism in the world or in actions
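A minimal sketch of the single-state case, assuming the usual two-square vacuum world (squares A and B; state = (location, dirt in A, dirt in B)): everything is observable and deterministic, so breadth-first search can compute the complete action sequence at planning time.

```python
from collections import deque

# Deterministic two-square vacuum world; the state (loc, dirt_a, dirt_b)
# is fully known, so a complete plan can be computed before acting.
def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == "left":
        return ("A", dirt_a, dirt_b)
    if action == "right":
        return ("B", dirt_a, dirt_b)
    # "suck" removes the dirt in the agent's current square
    return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)

def goal(state):
    return not state[1] and not state[2]  # no dirt in either square

def bfs(initial):
    """Breadth-first search for a shortest action sequence to a goal state."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal(state):
            return plan
        for action in ("left", "right", "suck"):
            nxt = step(state, action)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))

plan = bfs(("A", True, True))
print(plan)  # ['suck', 'right', 'suck']
```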
Multiple-state problem
semi-exact prediction is possible
state - not known exactly, but limited to a set of possible states after each action
accessibility of the world - not all essential information can be obtained through sensors
reasoning - can be used to determine the set of possible states
consequences of actions - not always or completely known to the agent; actions or the environment might exhibit randomness
goal - due to ignorance, there may be no fixed action sequence that leads to the goal
less restricted, but more complex
Example: Vacuum world, but the agent has no sensors.
The action sequence [right, suck, left, suck] is guaranteed to reach the goal state from any initial state.
Limitations:
Can’t deal with changes in the world during execution (“contingencies”)
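The sensorless case can be checked with a belief-state simulation (a sketch, assuming the same two-square world and transition model): track the set of states the agent might be in, and verify that right, suck, left, suck drives every one of the eight possible initial states into the goal.

```python
from itertools import product

# Sensorless two-square vacuum world: the agent cannot observe anything,
# so it reasons over a belief state -- the set of world states it might
# be in -- and looks for a sequence driving the whole set into the goal.
def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == "left":
        return ("A", dirt_a, dirt_b)
    if action == "right":
        return ("B", dirt_a, dirt_b)
    return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)

# Initially the agent could be in any of the 8 possible world states.
belief = set(product("AB", [True, False], [True, False]))

for action in ["right", "suck", "left", "suck"]:
    belief = {step(s, action) for s in belief}

print(belief)  # {('A', False, False)} -- the goal, whatever the start was
```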
Contingency problem
exact prediction is impossible
state - unknown in advance; may depend on the outcome of actions and changes in the environment
accessibility of the world - some essential information may be obtained through sensors only at execution time
consequences of actions - may not be known at planning time
goal - instead of single action sequences, there are trees of actions
contingency - branching point in the tree of actions
agent design - different from the previous two cases: the agent must act on incomplete plans
search and execution phases are interleaved
Example: Vacuum world, where the effect of a suck action is random.
There is no action sequence that can be computed at planning time and guaranteed to reach the goal state.
Limitations: can’t deal with situations in which the environment or the effects of actions are unknown
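A sketch of interleaved sensing and execution for the randomized suck (the 80% success probability is invented for illustration): instead of a fixed sequence, the agent branches on what it senses at execution time -- suck while the current square is dirty, otherwise move.

```python
import random

# Contingency sketch: "suck" only succeeds with some probability, so no
# fixed sequence is guaranteed to work. The agent senses at execution
# time and branches on the percept.
def run_agent(world, rng, max_steps=100):
    """world: dict like {'A': True, 'B': True} (True = dirty). Mutates
    world in place and returns the list of actions executed."""
    loc, trace = "A", []
    for _ in range(max_steps):
        if not world["A"] and not world["B"]:
            break                              # goal achieved: stop acting
        if world[loc]:                         # percept: this square is dirty
            trace.append("suck")
            if rng.random() < 0.8:             # suck succeeds only sometimes
                world[loc] = False
        else:
            loc = "B" if loc == "A" else "A"   # move to the other square
            trace.append("move")
    return trace

world = {"A": True, "B": True}
trace = run_agent(world, random.Random(0))
print(world, trace)  # both squares end up clean
```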
Exploration problem
effects of actions are unknown
state - the set of possible states may be unknown
accessibility of the world - some essential information may be obtained through sensors only at execution time
consequences of actions - may not be known at planning time
goal - can’t be completely formulated in advance, because states and consequences may not be known at planning time
discovery - what states exist
experimentation - what are the outcomes of actions
learning - remember and evaluate experiments
agent design - different from the previous cases: the agent must experiment
search - requires search in the real world, not in an abstract model
realistic problems, very hard
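Discovery, experimentation, and learning can be sketched as exploration of an unknown map (the maze below is invented for illustration and hidden from the agent, which can only try one action at a time; physically retracing steps to reach unexplored states is glossed over here):

```python
# Exploration sketch: the agent has no model of the state space at all;
# it discovers states and action outcomes only by acting in the world.
MAZE = {  # state -> {action: resulting state}, unknown to the agent
    "start": {"e": "mid"},
    "mid":   {"w": "start", "e": "goal"},
    "goal":  {},
}

def explore(world, start):
    """Depth-first exploration: visit every reachable state, remembering
    the outcome of each tried action as a learned model of the world."""
    visited, stack, model = set(), [start], {}
    while stack:
        state = stack.pop()
        if state in visited:
            continue
        visited.add(state)
        for action, nxt in world[state].items():  # run each experiment
            model[(state, action)] = nxt          # remember its outcome
            stack.append(nxt)
    return visited, model

visited, model = explore(MAZE, "start")
print(sorted(visited))  # ['goal', 'mid', 'start']
```

Once the model is complete, ordinary offline search over `model` becomes possible -- exploration turns an exploration problem back into one of the earlier problem types.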
Measuring Performance
for problem solving
success - has a solution been found?
quality - is it a good solution? What are the criteria?
optimal solution - may be difficult to find, and not always necessary
cost - sum of
• search cost (time and resources needed to find a solution)
• path cost (as defined above)
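The two cost components can be made concrete by instrumenting a search (a sketch assuming unit step costs: node expansions stand in for search cost, plan length for path cost; the number puzzle is invented for illustration):

```python
from collections import deque

# Breadth-first search that also reports how much work it did: total cost
# is the sum of search cost (expansions here) and path cost (plan length
# here, assuming unit step costs).
def bfs_with_stats(initial, goal_test, successors):
    frontier, seen, expansions = deque([(initial, [])]), {initial}, 0
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan, expansions
        expansions += 1
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None, expansions

# Toy problem: turn 0 into 6 using "+1" and "*2" moves.
plan, expansions = bfs_with_stats(
    0, lambda s: s == 6,
    lambda s: [("+1", s + 1), ("*2", s * 2)] if 0 <= s <= 12 else [])
print(len(plan), expansions)  # → 4 5  (path cost 4, search cost 5 expansions)
```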