Natural computation’s second week made me feel quite a bit more comfortable with the subject compared to the first week. We discussed artificial neural networks in more specific and relevant detail. We started with the main components:

  • Inputs (X1, X2, X3…Xn)
  • Weights (Wj1, Wj2, Wj3…Wjn)
  • Bias (Bj)
  • Neuron (summation function)
  • Transfer/Transformation/Activation function
    • Threshold/McCulloch-Pitts function [0,1]
    • Sigmoid function {Yj = 1/(1 + e^(-AUj))}
  • Output
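Putting these components together, a single neuron can be sketched in a few lines. This is a minimal Python illustration of the summation and sigmoid steps above; the input values, weights, bias, and slope A are arbitrary example numbers, not anything from the lecture:

```python
import math

def neuron_output(x, w, b, a=1.0):
    """Single neuron: weighted sum of inputs plus bias, squashed by a sigmoid."""
    u = sum(wi * xi for wi, xi in zip(w, x)) + b  # Uj = W·X + Bj
    return 1.0 / (1.0 + math.exp(-a * u))         # Yj = 1/(1 + e^(-A*Uj))

# Example: two inputs, arbitrary weights and bias
print(neuron_output([1.0, 0.5], [0.4, -0.2], 0.1))
```

Swapping the sigmoid for a step at zero would give the McCulloch-Pitts threshold unit instead.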

Next came an explanation of the 3 fundamental neural network architectures:

  • Single Layer Feedforward – see image below


[Image: simple feedforward neural network]

  • Multilayer Feedforward [Multilayer perceptron, MLP] – adds a layer of hidden neurons seen by neither the input nor the output. This hidden layer allows a higher order of computation on complex and detailed data sets.
[Image: MLP network with hidden layer]
  • Recurrent (feedback) networks – networks with connections feeding outputs back into the network; these are what brought neural networks back to life.
[Image: feedback neural network]
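A multilayer feedforward pass is just the single-neuron computation repeated layer by layer. Here is a toy Python sketch of a 2-2-1 MLP forward pass; the weights and biases are invented for illustration, not taken from the course:

```python
import math

def sigmoid(u, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * u))

def layer(x, weights, biases):
    """One fully connected layer: each neuron sums its weighted inputs plus bias."""
    return [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical 2-2-1 network: 2 inputs -> 2 hidden neurons -> 1 output
x = [1.0, 0.0]
hidden = layer(x, [[0.5, -0.5], [0.3, 0.8]], [0.1, -0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)
```

A recurrent network would differ only in that some of these outputs would be fed back in as inputs on the next time step.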

One point that I am still a little fuzzy on is how inputs to neural networks are formalized. It was mentioned in the lecture that no characteristic that is not actually present (e.g. sequence) should be represented. But male/female could equally be represented as [bool] or [bool, bool]. I think starting to use MATLAB will clarify this point for me.
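To make the two encodings concrete, here is a small Python sketch of both options for the male/female example (hypothetical helper functions, purely for illustration):

```python
# Two possible encodings of the same categorical attribute (male/female).

def encode_single(sex):
    # [bool]: a single input, 0 = male, 1 = female
    return [1 if sex == "female" else 0]

def encode_one_hot(sex):
    # [bool, bool]: one input per category (a "one-hot" encoding)
    return [1 if sex == "male" else 0, 1 if sex == "female" else 0]

print(encode_single("female"))   # [1]
print(encode_one_hot("female"))  # [0, 1]
```

The one-hot form introduces a redundant input for a binary attribute, which may be why the lecture's advice about not representing absent characteristics applies here.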

Another point that I feel uncomfortable with is time series prediction. The application to the stock market was mentioned, but I believe that applying pattern recognition to growing systems such as the stock market could be logically flawed. Black swan days have many causes, most of which have not been seen before. If the pattern preceding a black swan day has never appeared in the data set, the network cannot produce it in its output.
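For context, time series prediction with a neural network is usually framed as pattern recognition over a sliding window: the previous n values become the inputs and the next value becomes the target. A small Python sketch of that setup (the price numbers are invented):

```python
def sliding_windows(series, n):
    """Turn a time series into (inputs, target) pairs: n past values predict the next."""
    return [(series[i:i + n], series[i + n]) for i in range(len(series) - n)]

prices = [10, 11, 12, 11, 13]
print(sliding_windows(prices, 3))
# Each pair: 3 past prices as network inputs, the following price as the target.
```

This framing makes the worry above explicit: the network can only interpolate among windows resembling ones it has trained on, so a genuinely unprecedented pattern has no precedent to match.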

In any case, this week I hope to start using MATLAB and look forward to running some regression analyses 😀