
FIT5059 – Adv. Prog. for DB Applications Week 4

Unfortunately I was absent for week 4’s lecture and tutorial. My review of the week will be limited to the printed material. PL/SQL was the topic of week 4.

Ok, so to start with: I have not used PL/SQL before, so it is worth defining. PL/SQL is a procedural programming language developed by Oracle as an extension of SQL for their relational databases. It allows complex applications to further leverage the database layer.

The general structure of a PL/SQL block:

DECLARE
    <variable declarations>
BEGIN
    <program statements>
EXCEPTION
    <error handling statements>
END;

Any Oracle datatype can be used (CHAR, VARCHAR2, NUMBER, etc.).
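To make that structure concrete, here is a minimal anonymous block I put together while reading. The DROOM table and its room_id/capacity columns are my own guess at the unit's sample schema, so treat them as placeholders:

DECLARE
    v_room_id  NUMBER(4);
    v_capacity NUMBER(3);
BEGIN
    -- implicit cursor: SELECT ... INTO must return exactly one row
    SELECT room_id, capacity
      INTO v_room_id, v_capacity
      FROM DROOM                 -- assumed table name
     WHERE room_id = 101;
    DBMS_OUTPUT.PUT_LINE('Room ' || v_room_id || ' holds ' || v_capacity);
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No such room.');
END;
/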

Constructs covered in the lecture were:

  • IF/THEN/ELSE
  • Loops, pretest and posttest
  • Cursors, implicit (SELECT, FROM, WHERE) and explicit (e.g. FOR DroomRow IN DCursor LOOP)
  • Exceptions (pre-defined, undefined, user-defined) and error handling; a combined sketch follows this list
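Here is a rough sketch that exercises most of those constructs in one block. Again, DROOM, DCursor and the column names are placeholders of my own rather than anything confirmed from the lecture notes:

DECLARE
    CURSOR DCursor IS
        SELECT room_id, capacity FROM DROOM;   -- assumed table and columns
    v_count NUMBER := 0;
BEGIN
    -- explicit cursor processed with a cursor FOR loop
    FOR DroomRow IN DCursor LOOP
        IF DroomRow.capacity > 30 THEN
            DBMS_OUTPUT.PUT_LINE('Large room: ' || DroomRow.room_id);
        ELSE
            DBMS_OUTPUT.PUT_LINE('Small room: ' || DroomRow.room_id);
        END IF;
        v_count := v_count + 1;
    END LOOP;

    -- pretest loop (WHILE): the condition is checked before each pass
    WHILE v_count > 0 LOOP
        v_count := v_count - 1;
    END LOOP;

    -- posttest loop: the body always runs at least once, EXIT WHEN at the bottom
    LOOP
        v_count := v_count + 1;
        EXIT WHEN v_count >= 3;
    END LOOP;
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Unexpected error: ' || SQLERRM);
END;
/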

Triggers are an interesting topic; I have always been of the opinion that this sort of procedural logic should sit in the application layer (aside from logging). The syntax:

CREATE OR REPLACE TRIGGER trigger_name
{BEFORE|AFTER|INSTEAD OF}
{INSERT|UPDATE|DELETE}
[OF <attribute_name>] ON Table_name
[FOR EACH ROW] [WHEN (condition)]
BEGIN
    Trigger_body
END;
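As a worked example of that syntax, here is a logging-style trigger (the one use I am comfortable with). DROOM, its capacity column and the droom_log table are hypothetical names of my own:

CREATE OR REPLACE TRIGGER trg_droom_log
AFTER UPDATE OF capacity ON DROOM
FOR EACH ROW
WHEN (NEW.capacity <> OLD.capacity)
BEGIN
    -- record the change; note :OLD/:NEW take the colon prefix inside the body
    INSERT INTO droom_log (room_id, old_capacity, new_capacity, changed_on)
    VALUES (:OLD.room_id, :OLD.capacity, :NEW.capacity, SYSDATE);
END;
/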

I will need to work through the tutorial work to get some practice with this material!

 


FIT5167 – Natural Computation Week 4

Natural computation, week 4: the multilayer perceptron [MLP] for non-linear data analysis.

The MLP is one of several neural network types for non-linear modelling, but it is the most popular, hence our focus on it.

MLPs are very popular because they can model any non-linear pattern. Everything sounds great if we compare an MLP network to the single-layer networks we discussed earlier. However, learning in an MLP network is quite a bit more complex due to the indirect relationship between the hidden layer weights and the output error. As discussed previously, neural networks learn [supervised learning] by adjusting the weights applied to the inputs of each neuron. That weight adjustment must be tied to the output error, and the only way to propagate the error back through the network is through differentiation.
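To pin down what "adjusting the weights" means, here is the general gradient-descent update in my own notation (the slides' exact symbols are not reproduced here): for any weight w, learning rate \eta, target t and network output y,

    w \leftarrow w - \eta \frac{\partial E}{\partial w}, \qquad E = \tfrac{1}{2}(t - y)^2

so the whole exercise below is computing \partial E / \partial w for the two layers of weights.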

 

(Figure: we need to relate the weights w1...wn to the output error.)

At this point it is worth noting that the activation function for MLP neurons must be continuous and differentiable, so that the error can be chained backwards via differentiation.
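A common choice that satisfies this (assuming the unit uses the logistic sigmoid, which I have not confirmed from the printed notes) is

    f(v) = \frac{1}{1 + e^{-v}}, \qquad f'(v) = f(v)\bigl(1 - f(v)\bigr)

whose derivative can be computed directly from the neuron's own output.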

 

(Figure: the notation used in this backward chaining example.)

Now we need to find the error gradient with respect to b (the output neuron weights) and a (the hidden neuron weights). After conducting the first round of differentiation:

(Figure: first round of differentiation, giving the gradient for the output-layer weights b.)
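My reconstruction of that first round, writing v_o = \sum_j b_j h_j for the output neuron's net input and h_j for the hidden-layer outputs (again, my notation rather than the slides'):

    \frac{\partial E}{\partial b_j} = \frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial v_o}\cdot\frac{\partial v_o}{\partial b_j} = -(t - y)\, f'(v_o)\, h_j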

Now for the hidden layer:

(Figure: completing the backward chaining through to the hidden-layer weights a.)
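Continuing the chain one layer further, with v_j = \sum_i a_{ij} x_i the net input of hidden neuron j and x_i the inputs (same caveat about notation):

    \frac{\partial E}{\partial a_{ij}} = \frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial v_o}\cdot\frac{\partial v_o}{\partial h_j}\cdot\frac{\partial h_j}{\partial v_j}\cdot\frac{\partial v_j}{\partial a_{ij}} = -(t - y)\, f'(v_o)\, b_j\, f'(v_j)\, x_i

so the output error signal is re-weighted by b_j before being pushed back through the hidden activation.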

In the following tutorial we worked through this process in Excel to see the learning happen step by step.

I will be uploading a copy of this once I confirm it is correct.