
FIT5047 – Intelligent Systems Week 7

This week’s lecture delved further into probabilistic systems, specifically Bayesian networks. We explored the rationale behind the explosion of probabilistic systems over the past 15 years: the computational shortcuts achieved via the Markov property, and the real-world usefulness of refining probabilities as more information becomes known.

We got much more detail about Bayesian networks this week, including their components:

  • Random variables [nodes, represented as binary, ordered, integral or continuous states]
  • Node links
  • Conditional probability tables [quantifying the effect of parents on a node; see the sketch after this list]
  • The network is a directed, acyclic graph
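
To make those components concrete, here is a minimal sketch of the classic Rain/Sprinkler/GrassWet network in plain Python (not a network from the lecture; the structure and probabilities are illustrative assumptions). Each variable is a node, the dictionary keys encode the arcs from parents, and each table is a conditional probability table:

    # A minimal Bayesian network in plain Python: Rain -> Sprinkler,
    # Rain -> GrassWet and Sprinkler -> GrassWet (illustrative numbers).

    # P(Rain)
    P_rain = {True: 0.2, False: 0.8}

    # P(Sprinkler | Rain): the sprinkler rarely runs when it rains
    P_sprinkler = {
        True:  {True: 0.01, False: 0.99},   # given Rain = True
        False: {True: 0.40, False: 0.60},   # given Rain = False
    }

    # P(GrassWet | Sprinkler, Rain), keyed by (sprinkler, rain)
    P_grass_wet = {
        (True, True):   {True: 0.99, False: 0.01},
        (True, False):  {True: 0.90, False: 0.10},
        (False, True):  {True: 0.80, False: 0.20},
        (False, False): {True: 0.00, False: 1.00},
    }

    def joint(rain, sprinkler, grass_wet):
        """Chain-rule factorisation: each node conditions only on its parents."""
        return (P_rain[rain]
                * P_sprinkler[rain][sprinkler]
                * P_grass_wet[(sprinkler, rain)][grass_wet])

Note that the arcs only ever point "downstream" (Rain to Sprinkler, never back), which is what makes the graph acyclic.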

[Figure: a nice, simple example of a Bayesian network]

In the network seen above, causal relationships are indicated by the links between nodes. If a node’s state changes from unknown to known, the Bayesian network can efficiently update the probabilities of all the other nodes. Note that the efficiency of a network relies on finding conditionally independent nodes: if all of the nodes in this example were linked, the network would not be very effective. The Markov property as defined by the lecture notes:

There are no direct dependencies in the system being modeled which are not explicitly shown via arcs

In essence, when re-propagating probabilities after new information is introduced, the Markov property allows the calculations to be localised to each node and its neighbours, speeding up the process.
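
As a rough illustration of the saving (assuming all nodes are binary): a full joint distribution over n variables needs 2^n − 1 parameters, whereas a factorised network only needs 2^k parameters per node, where k is that node’s number of parents:

    # Parameter counts with and without the factorisation, assuming all
    # nodes are binary (each node needs 2**k CPT rows, where k is its
    # number of parents).
    def full_joint_params(n):
        return 2 ** n - 1

    def factored_params(parents_per_node):
        return sum(2 ** k for k in parents_per_node)

    # e.g. 10 binary nodes, eight of which have two parents each:
    print(full_joint_params(10))                             # 1023
    print(factored_params([0, 1, 2, 2, 2, 2, 2, 2, 2, 2]))   # 35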

With a network we can conduct belief updating, a form of inference. Essential to this process is identifying the conditional independence of nodes, which again is closely associated with the Markov property. I will need to do some reading before going into more detail on that one. The lecture came to a close with an introduction to Netica, followed up in the tutorial with some practical experimentation.
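
As a toy illustration of belief updating, here is brute-force enumeration over the sketch network defined earlier (Netica uses far more efficient structure-exploiting algorithms; this is just the simplest possible version). Observing GrassWet = True raises the probability of Rain from its prior of 0.2 to roughly 0.36:

    # Belief updating by enumeration on the sketch network above:
    # posterior P(Rain | GrassWet = True).
    from itertools import product

    def posterior_rain_given_wet():
        scores = {True: 0.0, False: 0.0}
        for rain, sprinkler in product([True, False], repeat=2):
            # Sum the joint over the hidden variable (Sprinkler)
            scores[rain] += joint(rain, sprinkler, grass_wet=True)
        total = scores[True] + scores[False]
        return {rain: p / total for rain, p in scores.items()}

    print(posterior_rain_given_wet())   # Rain = True: ~0.36, up from 0.2

This is the "refining probabilities as information increases" idea from the start of the lecture: each new observation shifts the beliefs over every other node.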
