
FIT5047 – Intelligent Systems Week 6

Intelligent Systems’ sixth week took a swing into soft computing and probabilistic systems. We had a quick introduction to probability theory, which had the usual intuition-breaking outcomes. The use of Venn diagrams to explain parts of Kolmogorov’s axioms was particularly useful. The definition of conditional probability did catch me a little off guard, however:

[Figure: Venn diagram illustrating conditional probability]

Although on review this does seem much clearer. Given the knowledge of B [yellow], what is the probability of A [red]? As per the diagram and the axiom, the answer is the probability of the intersection of A and B [green], taken relative to the probability of B.
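
Written out, the standard definition is:

P(A | B) = P(A ∩ B) / P(B), provided P(B) > 0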

A revision of elementary probability reminded me that, although at first glance it seems a trivial subject, probability requires some use of the brain and a calculator:

Suppose a “once in a century” flood has probability 0.01 of occurring in a year. How long do we expect to wait for one?

The answer (noting that (1 – p)^n is the probability of no flood in n consecutive years, so setting it to 0.5 gives the median wait):

(1 – p)^n = 0.5

log (1 – p)^n = log 0.5

n log(1 – p) = log 0.5

n = log 0.5 / log(1 – p)

≈ 69 years
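
As a quick sanity check, here is a minimal Python sketch (the function name is my own):

import math

def median_flood_wait(p):
    # Solve (1 - p)**n = 0.5 for n: the year by which a flood
    # has occurred with probability one half (the median wait).
    return math.log(0.5) / math.log(1 - p)

print(median_flood_wait(0.01))  # ~68.97 years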

Next came some discussion of notation and then, more importantly, an introduction to Bayes’ Theorem. A simple explanation of Bayes’ theorem can be seen here:

[Embedded explanation of Bayes’ theorem]

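To make the theorem concrete, here is a small Python sketch of a classic belief-update calculation; the diagnostic-test numbers are invented for illustration, not from the lecture:

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
prior = 0.01           # P(H): prevalence of a condition (made-up figure)
sensitivity = 0.90     # P(E | H): positive test given the condition
false_positive = 0.05  # P(E | not H): positive test without the condition

# Total probability of the evidence, P(E)
evidence = sensitivity * prior + false_positive * (1 - prior)

# Posterior belief after a positive test
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # 0.154 -- surprisingly low, a classic result
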
Discussion then continued to some of the silly mistakes in probability theory that litter its past. I’m sure that in 20 years many of the financial tools in use today will appear on the lecture slides in the same category.

Kevin also made time to add some material about the nature of probability. The suggestion made in Russell and Norvig is that probability is simply used to represent an agent’s belief state. If this is kept in mind, it is understandable why Bayesian networks have been such a boon over the past 15 years.
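
As a toy illustration of that belief-state view (my own sketch with invented numbers, not course material), a tiny Bayesian network lets an agent revise its belief about a hidden cause from an observed effect:

from itertools import product

# Toy three-node chain: Cloudy -> Rain -> WetGrass (all numbers invented).
p_cloudy = 0.5
p_rain = {True: 0.8, False: 0.2}  # P(Rain | Cloudy)
p_wet = {True: 0.9, False: 0.1}   # P(WetGrass | Rain)

def joint(cloudy, rain, wet):
    # Chain rule: the network factorises the joint distribution.
    p = p_cloudy if cloudy else 1 - p_cloudy
    p *= p_rain[cloudy] if rain else 1 - p_rain[cloudy]
    p *= p_wet[rain] if wet else 1 - p_wet[rain]
    return p

# Inference by enumeration: P(Cloudy | WetGrass = True)
num = sum(joint(True, r, True) for r in (True, False))
den = sum(joint(c, r, True) for c, r in product((True, False), repeat=2))
print(round(num / den, 3))  # 0.74: seeing wet grass raises belief in Cloudy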
