
FIT5167 – Natural Computation Week 4

Week 4 of Natural Computation covers the multilayer perceptron [MLP] for non-linear data analysis.

The MLP is one of several neural network architectures for non-linear modelling, but it is the most popular, hence our focus on it.

MLPs are very popular because they can approximate essentially any non-linear pattern. Everything sounds great if we compare an MLP to the single-layer networks we discussed earlier. However, learning in an MLP is quite a bit more complex, due to the indirect relationship between the hidden-layer weights and the output error. As discussed previously, neural networks learn [supervised learning] by adjusting the weights applied to the inputs of each neuron. The weight adjustments must be tied to the output error, and the only way to propagate that error back through the hidden layers is through differentiation.

 

We need to relate the weights w1, ..., wn to the output error.
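Concretely, the chain rule is what lets us do this. For an output-layer weight w, with net denoting the weighted input sum of the neuron that w feeds into (notation assumed here, since the notation figure is not reproduced), the gradient decomposes into factors we can actually compute; hidden-layer weights simply add further factors, as we will see below:

$$\frac{\partial E}{\partial w} = \frac{\partial E}{\partial y} \cdot \frac{\partial y}{\partial net} \cdot \frac{\partial net}{\partial w}$$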

At this point it is worth noting that the activation function for MLP neurons must be continuous and differentiable, so that the error can be chained backwards through the network by differentiation (the chain rule).
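For example, the logistic sigmoid (used here as an illustrative choice; the lectures may use a different activation) is smooth everywhere and has a derivative expressible in terms of its own output, which is exactly what makes it convenient for backward chaining:

$$f(x) = \frac{1}{1 + e^{-x}}, \qquad f'(x) = f(x)\,\bigl(1 - f(x)\bigr)$$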

 

[Figure: notation for the backward-chaining example]

Now we need to find the error gradient with respect to b (the output-neuron weights) and a (the hidden-neuron weights). After conducting the first round of differentiation:

[Figure: diff1 — first round of differentiation]
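Since the figure is not reproduced here, a sketch of the standard first round, assuming a sigmoid output neuron y = f(net_o) with net_o = Σ_j b_j h_j, squared error E = ½(t − y)², target t, and hidden outputs h_j (all notation assumed, as the original notation figure is missing):

$$\frac{\partial E}{\partial b_j} = \frac{\partial E}{\partial y}\,\frac{\partial y}{\partial net_o}\,\frac{\partial net_o}{\partial b_j} = -(t - y)\; y(1 - y)\; h_j$$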

Now for the hidden layer:

[Figure: diff2 — completion of backward chaining to the hidden layer]
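Again as a sketch under the same assumptions, with h_j = f(net_j) and net_j = Σ_i a_{ij} x_i, the chain simply continues one layer further back, picking up the output weight b_j and the hidden neuron's own sigmoid derivative:

$$\frac{\partial E}{\partial a_{ij}} = -(t - y)\; y(1 - y)\; b_j\; h_j(1 - h_j)\; x_i$$

The weights are then updated by gradient descent, e.g. b_j ← b_j − η ∂E/∂b_j for a learning rate η.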

In the tutorial that followed, we completed this process in Excel to see the learning in action.
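Until the spreadsheet is uploaded, here is a minimal Python sketch of the same single training step, assuming a 2-2-1 network, sigmoid activations, squared error and a learning rate of 0.5 (all illustrative values, not necessarily those used in the tutorial):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative network: 2 inputs -> 2 hidden neurons -> 1 output.
x = [0.5, 0.8]                      # input pattern
t = 1.0                             # target output
a = [[0.1, 0.4], [0.2, 0.3]]        # a[i][j]: weight from input i to hidden j
b = [0.5, -0.2]                     # b[j]: weight from hidden j to output
eta = 0.5                           # learning rate

# Forward pass.
h = [sigmoid(sum(x[i] * a[i][j] for i in range(2))) for j in range(2)]
y = sigmoid(sum(b[j] * h[j] for j in range(2)))

# Backward pass (chain rule, exactly the two gradients derived above).
delta_o = (t - y) * y * (1 - y)                 # output-neuron error term
delta_h = [delta_o * b[j] * h[j] * (1 - h[j])   # hidden-neuron error terms
           for j in range(2)]

# Gradient-descent weight updates.
for j in range(2):
    b[j] += eta * delta_o * h[j]
    for i in range(2):
        a[i][j] += eta * delta_h[j] * x[i]

print("output:", y, "updated b:", b)

Running this step repeatedly over a set of input/target pairs is the whole learning process the spreadsheet walks through cell by cell.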

I will be uploading a copy of this once I confirm it is correct.

 
