Categories
Intelligent systems

FIT5047 – Intelligent Systems Week 5

Taking a turn away from first-order logic, search algorithms and planning, week 5 introduced the key issues around natural language processing [NLP] and the programming language Prolog.
The logic programming paradigm used by Prolog is something I have not learned about before. The development of axioms and problem solving by querying those axioms is the foundation of languages such as Prolog. The engine of Prolog is a backward chaining theorem prover. The axioms in logic programming need to be Horn clauses: disjunctions of literals with exactly one positive literal.

An example:

king(X) & greedy(X) → evil(X).
king(john).
greedy(john).
?evil(john).

In the tutorial we were able to do some basic playing with a toy implementation by Lloyd Allison:

http://www.csse.monash.edu.au/~lloyd/tildeLogic/Prolog.toy/

Prolog relies very heavily on unification, a process that we were actually unable to re-enact correctly in the tutorial.

p(X, c(X,Y)).
p(X, c(Y,Z)) <= p(X,Z).
?p(A, c(1,c(2,c(3,nil)))).

Prolog Answer:

p(1, c(1, c(2, c(3, nil)))) yes
p(2, c(1, c(2, c(3, nil)))) yes
p(3, c(1, c(2, c(3, nil)))) yes

After reading the tutorial solution, I am not really much clearer on the proofs for each of these outcomes. I will have to follow up in the lecture.
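To get my head around it, here is a minimal sketch of unification in Python. The term representation (tuples for compound terms, capitalised strings for variables) and all the function names are my own assumptions, not anything from the toy implementation:

```python
# Minimal unification sketch. Variables are capitalised strings,
# compound terms are tuples whose first element is the functor.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until we hit a non-variable
    # or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unify the first clause head p(X, c(X, Y)) with the query term:
s = unify(('p', 'X', ('c', 'X', 'Y')),
          ('p', 'A', ('c', 1, ('c', 2, 'nil'))))
print(s)  # binds A (via X) to 1, and Y to the rest of the list
```

Walking the bindings shows why `p(1, c(1, c(2, c(3, nil))))` succeeds immediately from the first clause, while the later answers need the recursive second clause.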

NLP

We discussed the surface level methodologies for NLP:

  • Lexical analysis
  • Syntactic analysis
  • Semantic analysis
  • Pragmatic analysis

The focus of the lecture, however, was on the limitations of NLP: how the ambiguity of words, their meanings and context makes effective NLP very difficult. Implication was another issue covered for some time.

Next came some approaches for overcoming the challenges of NLP, such as statistical N-gram analysis. This veered the lecture into information retrieval, discussing the techniques used by search engines such as Google to interpret searches.
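To make the N-gram idea concrete, here is a tiny bigram-counting sketch (N = 2). The example sentence and function names are mine, not from the lecture:

```python
# Count bigrams (N-grams with N = 2) over a toy token stream.
from collections import Counter

def ngrams(tokens, n):
    # Slide a window of length n over the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "the cat sat on the mat the cat slept"
tokens = text.split()
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(2))  # ('the', 'cat') occurs twice
```

Relative frequencies of these counts are what statistical models use to estimate how likely one word is to follow another.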

On the topic of NLP I wondered whether there were any large knowledge bases being assembled to assist in the task. Carnegie Mellon has a cluster of computers working on exactly this:

http://en.wikipedia.org/wiki/Never-Ending_Language_Learning
Another interesting project:
http://en.wikipedia.org/wiki/Wolfram_Alpha


Categories
Adv. programming for DB apps.

FIT5059 – Adv. Prog. for DB Applications Week 5

PL/SQL continued in week 5, with some leaning towards integrating small PL/SQL programs into our Forms applications.

Procedures, functions and triggers have now been added to our repertoire. An example of a simple function:

CREATE OR REPLACE FUNCTION patient_age
    (Current_Patient PATIENT_DETAILS.Pname%TYPE)
    RETURN NUMBER
IS
    PatientDOB  DATE;
    CurrentAge  NUMBER;
BEGIN
    SELECT PDOB INTO PatientDOB
    FROM Patient_Details
    WHERE PName = Current_Patient;

    CurrentAge := TRUNC((SYSDATE - PatientDOB)/365);
    RETURN CurrentAge;
END;
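The date arithmetic is the only subtle part: days since birth divided by 365, truncated. Here is the same calculation sketched in Python (my own function name and sample dates; note the /365 approximation drifts slightly over leap years, just as the PL/SQL version does):

```python
# Python sketch of the patient_age calculation above:
# whole-days difference, divided by 365, truncated.
from datetime import date

def patient_age(dob, today=None):
    today = today or date.today()
    return (today - dob).days // 365

print(patient_age(date(1990, 6, 15), today=date(2011, 4, 1)))  # 20
```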

The queries for running/viewing/debugging such a stored function would be:

SELECT PName, patient_age(PName) AS Age FROM Patient_Details;

SELECT object_name FROM user_objects WHERE object_type='FUNCTION';

SELECT * FROM user_source WHERE name='PATIENT_AGE';

SELECT * FROM user_errors;

David Taniar made a good portion of source code available on Moodle for students; this will assist greatly.

The components we have covered so far are not particularly complicated. However, the number of processes we need to remember when implementing them is beginning to grow. Some practical revision of the work done in the tutorials will be needed.

This week's tutorial involved the implementation of a naive library program. For me, implementing the stored procedures that we had just learnt was the easy part. Remembering how to create a button to interact with a LOV object proved more elusive. I will endeavour to find some time whilst at uni to do some revision of the LOV and button objects!

Categories
Natural computation for intell. sys.

FIT5167 – Natural Computation Week 5

Part 2 of the MLP lectures was completed in week 5. We ran through some extended examples, including batch and online learning methods. The issues of local minima and overfitting were also introduced, along with some ways of overcoming the limitations they impose.

It turns out that batch learning is the most common learning method. We ran through an example using Mean Square Error [MSE], then a further example applying momentum.
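As a reminder to myself, here is a toy sketch of one batch update with momentum for a single linear neuron under MSE. The learning rate, momentum value and all names are illustrative assumptions, not the lecture's figures:

```python
# One batch-learning step with momentum for a linear neuron
# trained under mean square error (MSE).
import numpy as np

def batch_update(w, X, y, lr=0.1, momentum=0.9, velocity=None):
    if velocity is None:
        velocity = np.zeros_like(w)
    pred = X @ w                   # forward pass over the whole batch
    error = pred - y
    grad = X.T @ error / len(y)    # gradient of 0.5 * MSE over the batch
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Fit y = 2x from a tiny batch.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w, v = np.zeros(1), None
for _ in range(200):
    w, v = batch_update(X=X, y=y, w=w, velocity=v)
print(w)  # converges close to [2.]
```

The key contrast with online learning is that the gradient here is averaged over the whole batch before a single weight update is made.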

[Figure: the crux of batch learning]

The concept and reasoning behind each operation in back-propagation and batch learning are quite clear; I will definitely need some repetition to memorize the process for exam conditions, however.

The next topic was network generalization, whereby the fitting of the model is relaxed. This ensures that noise and patterns peculiar to the sample data do not have a negative impact on the ability of an NN-generated model to predict unseen values.

[Figure: generalization is required for effective modelling]

Other methods for preventing overfitting, thrashing and intractable learning were:

  • Early stopping [set number of epochs]
  • Regularization/weight decay
  • Data normalization
  • More, to be covered in next week's lecture
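Early stopping is the easiest of these to picture in code. A sketch, where `train_one_epoch` and `validation_error` stand in for real training code (both names are my own):

```python
# Early stopping sketch: halt when validation error stops
# improving for `patience` consecutive epochs.
def train_with_early_stopping(train_one_epoch, validation_error,
                              max_epochs=100, patience=5):
    best_err = float("inf")
    bad_epochs = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        err = validation_error()
        if err < best_err:
            best_err, bad_epochs = err, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # validation error has stalled: stop early
    return best_err, epoch

# Simulated validation errors: improves for three epochs, then stalls.
errors = iter([5.0, 4.0, 3.0, 3.5, 3.6, 3.7])
best, stopped_at = train_with_early_stopping(
    lambda: None, lambda: next(errors), max_epochs=10, patience=3)
print(best, stopped_at)  # 3.0 5
```

The point is that the epoch count is chosen by the validation set rather than fixed in advance, which caps how far the network can overfit the training data.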

The tutorial enabled us to start using MATLAB. The nprtool and nntool were used to create neural networks, which could then be exported and manually modified to specific requirements. I found MATLAB fairly easy to use, with the exception of the plotting tools, which I was unable to make produce what I wanted.


Categories
Network security

FIT5044 – Network Security Week 5

Firewalls were the topic of week 5's lecture. We began by discussing the definition of a firewall. Put simply:

  • A firewall is a “choke point/guard box” for controlling and monitoring network traffic.
  • It allows interconnections between different networks with some level of trust.
  • It imposes restrictions on network services (only authorized traffic is allowed).
  • It enforces auditing and access control (alarms for abnormal behavior can be generated).
  • It provides perimeter defence.

Ideally a firewall will block all ‘bad’ traffic whilst allowing all good traffic. Differentiating between good and bad traffic is a very difficult task.

[Figure: an illustration of a typical firewall setup]

Some slides were dedicated to the demilitarized zone [DMZ]. As shown above, the DMZ is a sub-network which is exposed to the internet. One would usually see servers such as web, email and DNS in the DMZ.

After running through the key components of firewall architecture, the lecture focussed on the importance of organisational structure and needs. Knowing which services are required, who should be able to use them and from which locations they can be used is necessary knowledge.

Some firewall types were also explored in the lecture notes:

  • Packet filtering [network layer]
  • Stateful packet filtering
  • Circuit level [transport layer]
  • Proxy firewalls [Application level]
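The simplest of these, packet filtering, boils down to matching each packet against an ordered rule list. A toy sketch (rule fields and names are my own; a real firewall inspects raw headers in the kernel):

```python
# Toy packet filter: first matching rule wins, default deny.
def match(rule, packet):
    # None in a rule field acts as a wildcard.
    return all(rule.get(k) in (None, packet[k])
               for k in ("proto", "dst_port", "src_ip"))

def filter_packet(rules, packet, default="deny"):
    for rule in rules:
        if match(rule, packet):
            return rule["action"]
    return default

rules = [
    {"proto": "tcp", "dst_port": 80, "src_ip": None, "action": "allow"},
    {"proto": "tcp", "dst_port": 22, "src_ip": "10.0.0.5", "action": "allow"},
]
verdict = filter_packet(rules, {"proto": "tcp", "dst_port": 22,
                                "src_ip": "1.2.3.4"})
print(verdict)  # deny: SSH is only allowed from 10.0.0.5
```

The default-deny fallthrough is what makes the firewall a choke point: anything not explicitly authorized is blocked.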

[Figure: firewalls can operate at different layers of the network stack]


There was a lot more detail in the lecture notes, which I look forward to hearing about in next week's lecture.

Categories
Intelligent systems

FIT5047 – Intelligent Systems Week 4

The fourth week of Intelligent Systems provided an introduction to planning. The question that stands out in my mind from the tutorial is: what is the difference between problem solving via search and problem solving via planning? After running through the week's material I hope to be able to answer that question.

First off, the two learning objectives for the week (hmm, seems too simple):

  • Goal stack planning
  • Partial order planning

What do we need to do goal based planning?

  • a world model
  • an action model
  • a problem solving strategy

Planning uses a divide and conquer strategy, allowing sub-goals to be identified. One must be careful to ensure that interrelations between sub-goals are identified. This can reduce branching factors and assist in problems where heuristics are difficult to define.

Planning algorithms can work forwards or backwards; it seems that in most situations working backwards from the goal proves more efficient and further reduces branching factors.

Here is an example of a Goal stack planner from the lecture:

World model: Objects, states and goals

Action model: Operators

Problem solving strategy: Goal-stack planning

States are defined using first-order logic, i.e.: AT(home) Λ HAVE(milk) Λ HAVE(bananas)

At this point we were introduced to the frame problem:

The frame problem is that specifying only which conditions are changed by an action does not allow us, in logic, to conclude that all other conditions remain unchanged.

I don’t really understand why this is such an issue at present; the human brain does not reconfirm every fact after an action, and it can still function effectively. Perhaps the reason this is considered such a big issue will become more apparent as I learn a bit more on the topic.

So, an action in a goal-stack planning system [using STRIPS] would appear as such:

>ACTION: GO(market)
>PRECONDITION: AT(home) Λ CAR-AT(home)
>ADD: AT(market), CAR-AT(market)
>DELETE: AT(home), CAR-AT(home)
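Applying a STRIPS-style action is mechanical enough to sketch in a few lines of Python: check the preconditions against the state, then apply the delete and add lists. The set-of-strings representation and the function name are my own assumptions:

```python
# Sketch of applying one STRIPS-style action to a state,
# where a state is a set of ground literals.
def apply_action(state, action):
    if not action["pre"] <= state:
        return None  # preconditions not satisfied in this state
    return (state - action["del"]) | action["add"]

go_market = {
    "pre": {"AT(home)", "CAR-AT(home)"},
    "add": {"AT(market)", "CAR-AT(market)"},
    "del": {"AT(home)", "CAR-AT(home)"},
}
state = {"AT(home)", "CAR-AT(home)"}
result = apply_action(state, go_market)
print(result)  # the AT(home) facts are replaced by AT(market) facts
```

Backward planning runs this in reverse: from the goal literals, pick an action whose add list supplies them, and push its preconditions as the new sub-goals.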

From the goal state, one can define the preconditions, identify which action is required to generate those conditions, and work back until the initial state is reached. I do have some questions on how actions are selected and backtracking occurs. As in the blocks world: if B is picked up, why would it not be placed back on C (see lecture notes), unless there is an explored set?

After the blocks world example we moved on to partial-order planning:


[Figure: partial-order vs total-order planning; partial-order planning allows for concurrent actions]

Partial-order planning allows for less commitment in the search, reducing backtracking. I am still a little fuzzy on how this would be implemented, so I will have to review the textbook.

So, planning is clearly different from searching, which simply applies valid operations to an initial state until it stumbles onto the goal state (though the stumbling can be guided by heuristics).