Bayesian Network



A Bayesian network reflects the states of some part of the world being modeled, and it describes how those states are related by probabilities. Consider the following scenario.
There is an agent that perceives a world containing mountains, roads, the sun, and so on; a model of that world is built using knowledge representation. Now, how can the agent say there is a possibility of an accident occurring on a bend in the road? This is where probabilities come in. Can we attach a degree of belief to the statements the agent makes? For that we need a Bayesian network, which works purely on conditional probability.

Typically, some states tend to occur more frequently when other states are present. Thus, if you are sick, the chances of a runny nose are higher. If it is cloudy, the chances of rain are higher, and so on. Here is a simple Bayes net that illustrates these concepts. In this simple world, say the weather can have three states: sunny, cloudy, or rainy; the grass can be wet or dry; and the sprinkler can be on or off. There are some causal links in this world. If it is rainy, that makes the grass wet directly. But if it is sunny for a long time, that too can make the grass wet, indirectly, by causing us to turn on the sprinkler.
"if the lawn is wet, what are the chances it was caused by rain or by the sprinkler", and "if the chance of rain increases, how does that affect my having to budget time for watering the lawn".
In such cases, conditional probability helps to predict the cause of the lawn's wetness. Let's have a short discussion about the conditional probability rule.
P(A|B) = P(A ∩ B) / P(B)
In words: the probability of event A given that event B happened is equal to the intersection of the two events divided by the probability of B.
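As a quick sanity check of the rule, here is a tiny numeric sketch in Python (the numbers are made up for illustration, not taken from any example in this post):

# Toy numbers: suppose P(B) = P(rain) = 0.2
# and P(A ∩ B) = P(wet grass and rain) = 0.18.
p_b = 0.2                        # P(B)
p_a_and_b = 0.18                 # P(A ∩ B)
p_a_given_b = p_a_and_b / p_b    # P(A|B) = P(A ∩ B) / P(B)
print(p_a_given_b)               # ≈ 0.9: given that it rains, the grass is wet 90% of the time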
To make these ideas concrete, we will use the following simple diagnosis problem.

Example problem:

Suppose we want to use the diagnostic assistant to diagnose whether there is a fire in a building based on noisy sensor information and possibly conflicting explanations of what could be going on. The agent receives a report about whether everyone is leaving the building. Suppose the report sensor is noisy: It sometimes reports leaving when there is no exodus (a false positive), and it sometimes does not report when everyone is leaving (a false negative). Suppose the fire alarm going off can cause the leaving, but this is not a deterministic relationship. Either tampering or fire could affect the alarm. Fire also causes smoke to rise from the building.
Suppose we use the following variables, all of which are Boolean, in the following order:
  • Tampering is true when there is tampering with the alarm.
  • Fire is true when there is a fire.
  • Alarm is true when the alarm sounds.
  • Smoke is true when there is smoke.
  • Leaving is true if there are many people leaving the building at once.
  • Report is true if there is a report given by someone of people leaving. Report is false if there is no report of leaving.
The variable Report denotes the sensor report that people are leaving. This information is unreliable because the person issuing such a report could be playing a practical joke, or no one who could have given such a report may have been paying attention. This variable is introduced to allow conditioning on unreliable sensor data. The agent knows what the sensor reports, but it only has unreliable evidence about people leaving the building. As part of the domain, assume the following conditional independencies:
  • Fire is conditionally independent of Tampering (given no other information).
  • Alarm depends on both Fire and Tampering. That is, we are making no independence assumptions about how Alarm depends on its predecessors given this variable ordering.
  • Smoke depends only on Fire and is conditionally independent of Tampering and Alarm given whether there is a Fire.
  • Leaving only depends on Alarm and not directly on Fire or Tampering or Smoke. That is, Leaving is conditionally independent of the other variables given Alarm.
  • Report only directly depends on Leaving.
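These assumptions fix the structure of the network: each variable's parents are exactly the variables it directly depends on. As a minimal sketch in Python, the directed edges can be written down as a parent map:

# Network structure implied by the independence assumptions above.
parents = {
    "Tampering": [],
    "Fire": [],
    "Alarm": ["Tampering", "Fire"],
    "Smoke": ["Fire"],
    "Leaving": ["Alarm"],
    "Report": ["Leaving"],
}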

This network represents the factorization

P(Tampering, Fire, Alarm, Smoke, Leaving, Report)
= P(Tampering) × P(Fire) × P(Alarm | Tampering, Fire)
× P(Smoke | Fire) × P(Leaving | Alarm) × P(Report | Leaving).
We also must define the domain of each variable. Assume that the variables are Boolean; that is, they have domain {true,false}. We use the lower-case variant of the variable to represent the true value and use negation for the false value. Thus, for example, Tampering=true is written as tampering, and Tampering=false is written as ¬tampering.
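As a minimal sketch (not a full library), this factorization can be evaluated directly in code: the probability of any complete assignment is the product of one entry from each conditional table. The CPT numbers used here are the ones listed later in this section:

def joint(tampering, fire, alarm, smoke, leaving, report):
    """P(Tampering, Fire, Alarm, Smoke, Leaving, Report) as a product of the six factors."""
    def p(prob_true, value):                  # P(X = value) from P(X = true)
        return prob_true if value else 1.0 - prob_true
    p_alarm_true = {(True, True): 0.5,        # P(alarm | fire ∧ tampering)
                    (True, False): 0.99,      # P(alarm | fire ∧ ¬tampering)
                    (False, True): 0.85,      # P(alarm | ¬fire ∧ tampering)
                    (False, False): 0.0001}[(fire, tampering)]
    return (p(0.02, tampering)                      # P(Tampering)
            * p(0.01, fire)                         # P(Fire)
            * p(p_alarm_true, alarm)                # P(Alarm | Tampering, Fire)
            * p(0.9 if fire else 0.01, smoke)       # P(Smoke | Fire)
            * p(0.88 if alarm else 0.001, leaving)  # P(Leaving | Alarm)
            * p(0.75 if leaving else 0.01, report)) # P(Report | Leaving)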



Before any evidence is observed, the model gives the following prior (marginal) probabilities for each variable being true:
P(tampering) = 0.02
P(fire) = 0.01
P(report) = 0.03
P(smoke) = 0.02
P(alarm) = 0.03
P(leaving) = 0.02

Now, observe how the values change when we condition on Tampering being true:
P(alarm|tampering) = 0.85
P(leaving|alarm) = 0.74
P(report|leaving) = 0.56

Next, let us make Fire true along with Tampering:

P(alarm | tampering ∧ fire) = 0.50
P(leaving|alarm) = 0.44
P(report|leaving) = 0.34

As noted earlier, Leaving and Report depend on Alarm and are conditionally independent of Tampering and Smoke given Alarm. To see this, make Alarm false even though Tampering and Fire are true: Leaving and Report fall back to their no-alarm probabilities, which shows they depend on Alarm and not directly on Fire or Tampering. We can therefore summarize the model with the following probability assumptions:
P(tampering) = 0.02
P(fire) = 0.01
P(alarm | fire ∧ tampering) = 0.5
P(alarm | fire ∧ ¬tampering) = 0.99
P(alarm | ¬fire ∧ tampering) = 0.85
P(alarm | ¬fire ∧ ¬tampering) = 0.0001
P(smoke | fire) = 0.9
P(smoke | ¬fire) = 0.01
P(leaving | alarm) = 0.88
P(leaving | ¬alarm) = 0.001
P(report | leaving) = 0.75
P(report | ¬leaving) = 0.01
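With all the numbers in place, any posterior can be computed by brute-force enumeration: sum the joint probability over all worlds consistent with the evidence, then normalize. A minimal sketch, reusing the joint function from the earlier code:

from itertools import product

# Order of variables: tampering, fire, alarm, smoke, leaving, report.
def posterior(query_index, evidence):
    """P(variable at query_index is true | evidence), where evidence maps index -> bool."""
    numerator = denominator = 0.0
    for world in product([True, False], repeat=6):
        if any(world[i] != v for i, v in evidence.items()):
            continue                      # skip worlds inconsistent with the evidence
        p = joint(*world)
        denominator += p
        if world[query_index]:
            numerator += p
    return numerator / denominator

# Example: probability of fire given a report of people leaving.
print(posterior(1, {5: True}))            # P(fire | report)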

This example illustrates how the independence assumptions of a belief network give commonsense conclusions, and it also demonstrates that explaining away is a consequence of those assumptions. Bayesian learning methods are firmly based on probability theory and exploit advanced methods developed in statistics. Naïve Bayes is a simple generative model that works fairly well in practice. A Bayesian network allows a limited set of dependencies to be specified using a directed graph, and inference algorithms determine the probability of values for query variables given values for evidence variables.
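Explaining away can be seen directly with the enumeration sketch above: a report raises the probability of tampering, but additionally observing smoke, which points to fire, drives that probability back down, because fire then explains the report.

# Explaining away, using posterior(...) from the sketch above.
# Indices: 0 = Tampering, 3 = Smoke, 5 = Report.
print(posterior(0, {5: True}))            # P(tampering | report): above the 0.02 prior
print(posterior(0, {5: True, 3: True}))   # P(tampering | report ∧ smoke): pulled back down,
                                          # since smoke makes fire the likelier explanation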
