Types of Environment

Environment I
Fully observable (accessible) vs. partially observable (inaccessible):

·         Fully observable if the agent's sensors detect all aspects of the environment relevant to the choice of action

·         Could be partially observable due to noisy, inaccurate or missing sensors, or an inability to measure everything that is needed

·         A model can keep track of things that were sensed previously, cannot be sensed now, but are probably still true (see the sketch after this list)

·         Often, if other agents are involved, their intentions are not observable, but their actions are

·         E.g.:

·         Chess – the board is fully observable, as are the opponent’s moves.

·         Driving – what is around the next bend is not observable (yet).
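
Below is a minimal sketch of the belief-tracking idea in Python. The corridor world, the sense() helper and the belief dictionary are all made-up names for illustration, not part of any standard library.

    # A toy one-dimensional corridor world (illustrative only).
    WORLD = ["clear", "clear", "obstacle", "clear", "goal"]  # true state, hidden from the agent

    def sense(position):
        """Partial sensor: the agent sees only its own cell and the one ahead."""
        visible = {position: WORLD[position]}
        if position + 1 < len(WORLD):
            visible[position + 1] = WORLD[position + 1]
        return visible

    # The agent's internal model: remembers cells it can no longer see.
    belief = {}
    for position in range(len(WORLD)):
        belief.update(sense(position))  # old entries persist even when out of view

    print(belief)  # after the walk, the belief covers the whole corridor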
             
Environment II
Deterministic vs. stochastic (non-deterministic):

·         Deterministic = the next state of the environment is completely predictable from the current state and the action executed by the agent

·         Stochastic = the next state has some uncertainty associated with it

·         Uncertainty could come from randomness, lack of a good environment model, or lack of complete sensor coverage

·         The environment is strategic if it is deterministic except for the actions of other agents

Examples:
Non-deterministic environment: the physical world, e.g. a robot on Mars.
Deterministic environment: a Tic-Tac-Toe game.
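
The contrast can be made concrete with a short sketch. Everything here (the integer state, the 20% slip probability) is an arbitrary choice for illustration, not a standard model:

    import random

    def deterministic_step(state, action):
        # Next state is fully determined by the current state and the action.
        return state + (1 if action == "right" else -1)

    def stochastic_step(state, action):
        # With 20% probability the wheels slip and the agent stays put
        # (probability chosen arbitrarily for illustration).
        if random.random() < 0.2:
            return state
        return deterministic_step(state, action)

    print(deterministic_step(0, "right"))  # always 1
    print(stochastic_step(0, "right"))     # usually 1, sometimes 0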

Environment III
Episodic vs. sequential:

·         Episodic = the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself

·         Sequential = current decisions affect future decisions, or rely on previous ones

·         Expert advice systems are an example of an episodic environment – an episode is a single question and answer

·         Most environments (and agents) are sequential

·         Many are both – a number of episodes, each containing a number of sequential steps to a conclusion

Examples:
Episodic environment: a mail-sorting system.
Non-episodic environment: a chess game.
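
A small sketch of the difference, with made-up policies (the mail-sorting rule and the move names are purely illustrative):

    def episodic_policy(percept):
        # Mail sorting: the decision depends on this letter alone.
        return "bin_A" if percept.startswith("A") else "bin_B"

    def sequential_policy(percept, history):
        # Chess-like: the choice depends on everything played so far.
        history.append(percept)
        return "move_%d" % len(history)  # stands in for real game reasoning

    history = []
    print(episodic_policy("A123"))           # bin_A, independent of the past
    print(sequential_policy("e4", history))  # move_1
    print(sequential_policy("e5", history))  # move_2 -- depends on history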
Environment IV
Discrete vs. continuous:
·         Discrete = time moves in fixed steps, usually with one measurement per step (and perhaps one action, though possibly none). E.g. a game of chess

·         Continuous = signals constantly arrive at the sensors, and actions change continually. E.g. driving a car
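
A rough sketch of the two time models (the Euler integration step and the dt value are arbitrary choices for illustration):

    # Discrete: one percept/action pair per turn (e.g. a board game).
    for turn in range(3):
        percept = "board_state_%d" % turn
        action = "make_move"          # exactly one action per fixed step

    # Continuous: quantities evolve between any two samples; a common
    # approximation is to integrate over a small time increment dt.
    position, velocity, dt = 0.0, 1.0, 0.01
    for _ in range(100):
        position += velocity * dt     # Euler integration over one second

    print(position)  # ~1.0 after one simulated second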


Environment V
Static vs. dynamic:

·         Dynamic if the environment may change over time.

·         Static if nothing (other than the agent) in the environment changes

·         Other agents in an environment make it dynamic

·         The goal might also change over time

·         The environment is not dynamic just because the agent moves from one part of it to another, though this has a very similar effect

·         E.g. playing football – the other players make it dynamic; mowing a lawn is static (unless there is a cat...); expert systems are usually static (unless the knowledge changes)
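
A sketch of the distinction, using the lawn/cat example above (the 30% wander probability and all names are invented for illustration):

    import random

    lawn = ["uncut"] * 5
    cat_position = None

    def world_step():
        """In a dynamic environment, the world may change on every tick."""
        global cat_position
        if random.random() < 0.3:            # the cat wanders in
            cat_position = random.randrange(len(lawn))

    for cell in range(len(lawn)):
        world_step()                         # changes happen independently of the agent
        if cell != cat_position:
            lawn[cell] = "cut"               # a static world would simply skip world_step()

    print(lawn, "cat at:", cat_position)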


Environment VI

Single-agent vs. multi-agent:

·         An agent operating by itself in an environment is the single-agent case

·         Multi-agent means other agents are present

·         A loose definition of another agent is anything that changes from step to step

·         A stronger definition is that it must sense and act

·         Multi-agent environments can be competitive or co-operative

·         Human users are an example of another agent in a system

·         E.g. other players on a football team (or the opposing team), wind and waves for a sailing agent, other cars for a taxi-driving agent
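
A minimal multi-agent sketch – two illustrative agents sense a shared state and act on it, so each agent's outcome depends on the other's choices (the scoring rule is invented for the example):

    shared_score = {"A": 0, "B": 0}

    def agent_policy(name, score):
        # Competitive flavour: the agent that is behind tries harder.
        return 2 if score[name] <= min(score.values()) else 1

    for _ in range(3):
        # Every agent chooses from the same shared state, then all actions apply.
        actions = {name: agent_policy(name, shared_score) for name in shared_score}
        for name, points in actions.items():
            shared_score[name] += points

    print(shared_score)  # each agent's result depended on the other's choices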
