Advanced Network Security

FIT5037 – Advanced Network Security Week 9

‘Network security and performance’ marked the ninth week of FIT5037. This is a logical extension of the previous week’s lecture on organizational-level network security. There has traditionally been a trade-off between speed and security. This is most definitely a sore spot for many organizations, particularly when they find performance has degraded after investing money! The lecture looked at common techniques that should be used to ensure convenience is not disproportionately affected by security efforts. The notes outlined four key topics for the week:

  • Load balancing and firewalls
  • VPN and network performance
  • Network address translation [NAT] and load balancing
  • Network security architecture

Key awareness issues that recurred throughout the lecture:

  • Security! – Does a software/hardware/architecture solution, or a combination of these, provide sufficient security?
  • Speed and availability – Do security solutions allow for the required level of service availability for operational requirements? Is service speed affected to an unacceptable extent?
  • Robustness – If one component fails, what are the repercussions for the rest of the network in terms of the previous issues?
Example of adjustments to the design in consideration of organisational concerns (source: notes10)

The diagram above illustrates how the adoption of load balancers and multiple parallel firewalls satisfies both speed and robustness requirements.
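The balancing idea can be sketched in a few lines of Python. This is a hypothetical round-robin selector, not any particular product's algorithm: traffic is spread across parallel firewalls (speed), and a node that fails its health check is simply skipped (robustness).

```python
# Hypothetical sketch: round-robin load balancing across parallel
# firewalls, skipping any node marked unhealthy.
class RoundRobinBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)       # e.g. ["fw-1", "fw-2", "fw-3"]
        self.healthy = set(self.nodes)
        self.index = 0

    def mark_down(self, node):
        self.healthy.discard(node)

    def mark_up(self, node):
        if node in self.nodes:
            self.healthy.add(node)

    def next_node(self):
        # Walk the ring until a healthy node is found.
        if not self.healthy:
            raise RuntimeError("no healthy firewalls available")
        for _ in range(len(self.nodes)):
            node = self.nodes[self.index % len(self.nodes)]
            self.index += 1
            if node in self.healthy:
                return node

balancer = RoundRobinBalancer(["fw-1", "fw-2", "fw-3"])
print(balancer.next_node())  # fw-1
balancer.mark_down("fw-2")
print(balancer.next_node())  # fw-3 (fw-2 is skipped)
```

A real load balancer would add active health checks and connection state tracking, but the failover behaviour is essentially this loop.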

The lecture went on to introduce the topics of protocol security and certain VPN solutions.

IT Research Methods

FIT5185 – IT Research Methods Week 9

The final lecture on quantitative data analysis covered five specific statistical tests:

  • Binomial – Given a weighted coin, how many heads will probably result from 30 tosses
  • Median – Checks that the medians of two populations are not significantly different
  • Mood’s median test – Checks for significant similarity between unrelated samples (non-parametric)
  • Kolmogorov-Smirnov – Measures the cumulative difference between data sets; are the data sets different?
  • Friedman – Testing for significant differences across testing intervals on a sample population

The lecture slides included clear examples of these tests, and the tutorial followed up with some practical examples using SPSS. After the four weeks of quantitative data analysis we now have a decent toolbox, particularly for non-parametric data analysis. Our assignment requires application of these tools. I imagine the assignment will expose some of the ambiguities that arise when reasoning from quantitative analysis.
An example of non-parametric data
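As a small illustration of the binomial test from the list above, the distribution for a weighted coin can be computed directly with the standard library. This is a sketch of the underlying formula, not the SPSS procedure; the weighting of 0.6 is an assumed example value.

```python
# Binomial PMF from first principles: P(exactly k heads in n tosses)
# for a coin weighted to land heads with probability p.
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Weighted coin (assumed p = 0.6), 30 tosses as in the lecture example:
expected_heads = 30 * 0.6                 # mean of the distribution = 18.0
p_exactly_18 = binomial_pmf(18, 30, 0.6)  # most likely single outcome
print(expected_heads)
print(round(p_exactly_18, 4))
```

Summing the PMF over k = 0..30 gives 1, which is a quick sanity check that the formula is right.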
Reading Unit - DoS Research

FIT5108 – DoS Reading Unit Part 8

The final post on attack reviews will delve into physical denial of service attacks via network intrusion. Physical attacks can also be carried out by attackers gaining access to the locations where physical systems are stored, but that attack method extends beyond the scope of this reading unit. Physical attacks via a network generally involve maliciously modifying vulnerable firmware in an attempt to create further vulnerabilities, render hardware temporarily unavailable, or permanently disable (brick) the targeted hardware.

This type of attack can be referred to by a few different names:

  • Phlashing
  • Permanent DoS / PDoS
  • Bricking
  • Firmware attack

Rich Smith of HP Labs outlined this vulnerability in his 2008 presentation of a tool called PhlashDance.

In the presentation, Smith looks at:

  • Achieving PDoS remotely
  • Possibility of generic attacks – Which would significantly increase the likelihood of attackers creating tools, allowing almost anyone to exploit a firmware vulnerability.
  • Mitigation

Taking an abstract look at firmware development in industry, we can see that it generally lags behind system software. For example, it is not uncommon to patch drivers; Windows does this quite regularly. Updating firmware is much less common. Thus there is a great deal more legacy code, and code that was not developed with security in mind. Given these facts, the chances of vulnerabilities are high. Smith goes on to highlight the lack of auditing for firmware vulnerabilities and the fact that most security policies overlook firmware as a system component. This is compounded by the emergence of network-connected devices that update their firmware automatically.

Another great point that Smith makes is the very weak access control over many devices’ firmware when weighed against the power that re-flash access provides. The introduction to firmware closes with definitions of the two major firmware update mechanisms:

  • Push – Firmware is sent to the device
  • Pull – A firmware update is signaled to the device, which then connects to a designated location to collect the new binary

These update mechanisms are the main target for attackers who wish to maliciously modify a device’s firmware.

PhlashDance – an automated firmware vulnerability tester (Rich Smith, 2008)

Smith identifies the lack of cryptographic data verification as the primary weakness in automatic firmware update packages. He implements a fuzzer to overcome the cyclic redundancy checks (CRCs) implemented by most vendors.


The presentation recommends the following mitigation efforts by developers:

  • Remote updates off by default
  • Physical presence required to flash firmware
  • Crypto signatures required to flash
  • Validation in firmware, not client application
  • Design with attack tolerance not fault tolerance
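To make the “crypto signatures, not client-side CRCs” point concrete, here is a hypothetical Python sketch contrasting the two checks. Real signed firmware typically uses public-key signatures rather than an HMAC; an HMAC with an assumed device key is used here only to keep the example short, and the firmware bytes and key name are invented.

```python
# Sketch (assumed design, not any vendor's actual mechanism): a CRC detects
# accidental corruption but is trivial for an attacker to recompute over a
# modified image, whereas a keyed MAC cannot be forged without the key.
import hashlib
import hmac
import zlib

KEY = b"device-provisioned-secret"   # hypothetical per-device secret

def crc_check(image, expected_crc):
    return zlib.crc32(image) == expected_crc

def hmac_check(image, expected_tag):
    tag = hmac.new(KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)

firmware = b"\x7fELF...original firmware image"
good_tag = hmac.new(KEY, firmware, hashlib.sha256).digest()

# Attacker modifies the image and simply recomputes the CRC:
tampered = firmware.replace(b"original", b"malicious")
print(crc_check(tampered, zlib.crc32(tampered)))   # True - the CRC is fooled
print(hmac_check(tampered, good_tag))              # False - the MAC rejects it
```

This is exactly why Smith's fuzzer could defeat vendor CRCs: they are integrity checks, not authentication.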

The following is also recommended for users:

  • Patch firmware
  • Lock down devices
  • Understand the full capabilities of devices and take their security seriously

For an administrator of a large network, implementing intrusion detection rules that can identify malicious firmware updates would also be an ideal solution. Taking note of the ports that firmware updates use will also allow devices to be locked down behind the firewall.

Advanced Network Security

FIT5037 – Advanced Network Security Week 8

Taking a more abstract view on computer security, week 8’s topic was computer security for large networks. The first part of the lecture discussed risk analysis. Some key steps in conducting risk analysis:

  • Value of assets being protected – if attackers break into our network, what is the worst-case scenario? This value is constantly rising in today’s business environment. This step will also establish a budget range for system security; there is no point spending $1 million protecting a system that contains information and assets worth one hundred thousand.
  • Threat identification – What are the known threats to our system? This could include likely attackers, the types of known exploits and an understanding of what possible unknown exploits may be capable of.
  • Identification of key system components:
Some key components (source: Week 9 lecture notes)
  • Define each step in the security life cycle – Prevention -> Detection -> Response -> Recovery
  • Specifying policy areas for People, Processes and Tools
  • Begin development of security policy using a logical framework: Organizational -> Security Architecture -> Technical
  • Design, implementation and testing of chosen security tools:
Some security tools (source: Week 9 lecture notes)
  • Audit any security systems in place at set time periods (e.g. once a year)
  • Understand that organizational requirements can change quickly and that the security policy is in place to protect the organization whilst allowing it to operate as unhindered as possible; there is no point having a completely secure system that takes employees two hours to gain access to.
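One common way to put a number on the first step (asset value versus security budget) is an annualized loss expectancy calculation. The lecture did not prescribe this formula, so treat it as one standard option rather than the course's method; the dollar figures below are assumed for illustration.

```python
# Annualized Loss Expectancy (ALE): standard risk-analysis arithmetic for
# bounding a sensible security budget. (The specific formula is an
# assumption here - the lecture did not name it.)
def ale(asset_value, exposure_factor, annual_rate_of_occurrence):
    sle = asset_value * exposure_factor        # Single Loss Expectancy
    return sle * annual_rate_of_occurrence

# Assets worth $100k, a breach destroys 40% of that value, and we expect
# such a breach once every two years on average:
annual_loss = ale(100_000, 0.4, 0.5)
print(annual_loss)  # 20000.0
```

Against a $20k expected annual loss, spending $1 million a year on controls would clearly be disproportionate; the ALE gives an upper bound on a rational budget.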

Design of system-wide security policies may come off as a more managerial, less technical operation. However, to implement a good security policy, decision makers must be aware of and have an in-depth understanding of the available tools, the threats from attackers, and the organizational requirements. I would be very surprised if most vulnerabilities were a direct result of technical issues rather than holes left by poorly designed and implemented security policies.

IT Research Methods

FIT5185 – IT Research Methods Week 8

Probability, hypothesis testing and regression analysis continued the topic of quantitative analysis in week 8. Our discussion of the statistical techniques that we are using with the SPSS package focuses on the interpretation of outputs rather than the mathematics behind them. This seems reasonable given the limited time assigned to such a large area.

The first points covered were definitions of probability:

  • Marginal (simple) probability – rolling three sixes in a row with a standard die => (1/6) x (1/6) x (1/6)
  • Joint probability – P(AB) => P(A) x P(B) (for independent events)
  • Conditional probability – I would stick with Bayes’ theorem => see below
Conditional Probability
  • Binomial distribution – the probability of the number of times an event occurs, given a true/false outcome and n trials, e.g. how many times will heads appear in 20 tosses of a coin.
  • Normal (Gaussian) distribution – requires continuous random variables (e.g. age), see below
Normal distribution, showing the percentages for each standard deviation interval
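The definitions above can be worked through with nothing but the standard library. The die calculations follow the lecture examples; the numbers in the Bayes example (a test for a condition affecting 1% of a population) are hypothetical, chosen only to show the mechanics.

```python
# Marginal probability: P(six) on one roll of a fair die.
p_six = 1 / 6

# Joint probability of independent events: three sixes in a row.
p_three_sixes = p_six ** 3
print(round(p_three_sixes, 5))  # 0.00463

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical example: condition affects 1% of people, the test has 95%
# sensitivity and a 5% false-positive rate.
p_a = 0.01                   # P(condition)
p_b_given_a = 0.95           # P(positive | condition)
p_b_given_not_a = 0.05       # P(positive | no condition)

# Total probability of a positive result:
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # P(condition | positive test)
```

The Bayes result (about 0.16) is a classic reminder of why conditional probability is worth computing rather than guessing: a positive test on a rare condition is far less conclusive than intuition suggests.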

Hypothesis testing and regression analysis followed. The recurring theme is the significance value of less than 0.05 required for hypothesis support.

SPSS seems like a great tool for statistical analysis, with all of the widely used statistical methods available and relatively simple to use.

Reading Unit - DoS Research

FIT5108 – DoS Reading Unit Part 7

This week’s DoS attack review will focus on wireless vulnerabilities, specifically those resulting from replay attacks. The simple definition of a replay attack is:

A network attack whereby valid data transmission is maliciously or fraudulently repeated or delayed

Replay attacks are simple but in many cases very effective (source: Feng et al. 2007)

A key article used in this post is: Feng Z., Ning, J., Broustis, I., Pelechrinis, K., Krishnamurthy, S. V., Faloutsos, M.,  2008?, Coping with Packet Replay Attacks in Wireless Networks, US Army Research Office

Replay attacks are particularly effective against wireless networks, as the capture and injection of packets is much easier to accomplish than on a wired network. Aireplay-ng is a Linux tool that enables replay attacks to be conducted on unprotected wireless networks very simply. This tool is used in conjunction with packetforge-ng, which allows attackers to easily create new or forged packets for injection. Feng et al. cite network degradation of up to 61% via one terminal against an access point. That degradation is achieved through unintelligent packet spamming. Also mentioned is the straightforward mitigation strategy of using public key encryption to digitally sign packets, although this is indeed a slow process for data comms.

Using packet replay, there are a number of attacks that can be launched:

  • Simplistic packet replay to increase network congestion.
  • De-authentication – This attack sends disassociate packets to one or more clients which are currently associated with a particular access point.

Mitigation strategies:

  • One time passwords
  • Session tokens
  • Random check numbers
  • Timestamping
  • RADIUS [Remote Authentication Dial In User Service] server
  • EAP [Extensible Authentication Protocol]
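Several of these mitigations (nonces, timestamps, session tokens) share one idea: the receiver must be able to recognise a message it has already accepted, or one that is too old to be fresh. A minimal hypothetical sketch, not any real protocol's implementation:

```python
# Toy replay guard: reject any message whose nonce has been seen before
# or whose timestamp is older than a freshness window.
import time

class ReplayGuard:
    def __init__(self, max_age_seconds=30):
        self.seen_nonces = set()
        self.max_age = max_age_seconds

    def accept(self, nonce, timestamp, now=None):
        now = time.time() if now is None else now
        if now - timestamp > self.max_age:
            return False            # stale: likely a delayed replay
        if nonce in self.seen_nonces:
            return False            # duplicate nonce: a replayed packet
        self.seen_nonces.add(nonce)
        return True

guard = ReplayGuard()
t = 1_000_000.0
print(guard.accept("n1", t, now=t + 1))    # True  - fresh message
print(guard.accept("n1", t, now=t + 2))    # False - replayed nonce
print(guard.accept("n2", t - 60, now=t))   # False - stale timestamp
```

Real protocols bind the nonce into a MAC over the packet so an attacker cannot simply swap in a new nonce, and bound the nonce set using the timestamp window; this sketch only shows the acceptance logic.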

As per the advanced network security lectures, this post will focus on analyzing how RADIUS and EAP prevent replay attacks. The RADIUS protocol documentation lists a Digest-Nonce-Count attribute, as does the EAP protocol specification.

Throughout the handshake process, nonce values are used by both the AP and the supplicant to protect against replay attacks:

When using EAP, nonce values are used to establish session keys safe from replay attacks

I need to do further reading on the process after the key handshake. I would imagine that an encrypted counter could be used to render replay attacks ineffective.

Advanced Network Security

FIT5037 – Advanced Network Security Week 7

Week 7 jumped away from Snort and on to wireless communications. The lecture slides were particularly detailed; the key enhancements to be covered:

  • TKIP – Temporal Key Integrity Protocol
  • LEAP – Lightweight Extensible Authentication Protocol (according to most sources, becoming legacy to EAP-FAST)
  • EAP-TLS – Extensible Authentication Protocol – Transport Layer Security (A public key system for wireless LANs using a RADIUS server)
  • PEAP – Protected Extensible Authentication Protocol – “PEAP is similar in design to EAP-TTLS, requiring only a server-side PKI certificate to create a secure TLS tunnel to protect user authentication”
  • RADIUS – Remote Authentication Dial In User Service
  • 802.11 – (a,b,g,n) IEEE standardized wireless protocols 😀
  • 802.16 – the IEEE-standardized WiMAX [Worldwide Interoperability for Microwave Access] family.

So, to start with there is a bag full of acronyms which are all interlinked.

There seem to be a few fundamental problems when securing wireless networks:

  1. Devices connecting may have low computational power, e.g. smartphones. (This is relative to desktops and servers, so it will most likely always be the case)
  2. Incoming and outgoing packets are broadcast and thus easy to intercept
  3. Users can be moving between access points
  4. Performance requirements are high; people expect wireless connections to be no slower than wired connections

These points combined force a situation of weaker security.

The detail of the lecture was in covering the different forms of handshakes and authentication that are floating around at the moment… and all of their flaws. It will take a fair bit of time to really become familiar with these.

I get the feeling that wireless security is always going to be an issue simply because of the computing power mismatch between mobile and fixed devices in addition to the broadcast nature of the communications. The advancement over the past 5 years does however show that the band-aid approach is sufficient to facilitate most of the world adopting wireless networks.

WiMAX - The way of the future!
IT Research Methods

FIT5185 – IT Research Methods Week 7

A short week for IT research methods in terms of new material. Due to the literature review presentations we did not have a tutorial and only half a lecture. The topic of the lecture was ‘Correlation Analysis’, presented by Joze Kuzic.

Let’s start with the simple definition of correlation analysis: ‘A statistical investigation of the relationship between one factor and one or more other factors’.

One point that I needed reminding of was correlation vs regression:

Correlation – 1) both variables are random variables, and 2) the end goal is simply to find a number that expresses the relation between the variables
Regression – 1) one of the variables is a fixed variable, and 2) the end goal is to use the measure of relation to predict values of the random variable based on values of the fixed variable

The topic of causality and correlation was approached quite carefully in the lecture notes, noting that correlation can be used to look for causality but does not imply causality.

Methods of correlation:

Pearson’s correlation coefficient – for parametric (randomized, normally distributed) data.

Spearman rank-order correlation coefficient – for non-parametric data, range [-1.0, 1.0]
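To make the parametric vs rank-based distinction concrete, both coefficients can be computed from scratch with the standard library. This is a teaching sketch (no tie handling in the ranking), not how you would do it in practice with SPSS; the data values are invented.

```python
# Pearson: correlation of the raw values. Spearman: Pearson applied to the
# ranks of the values, which is why it only needs a monotonic relationship.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    def ranks(vs):
        # Assign rank 1..n by sorted order (assumes no ties, for simplicity).
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(xs), ranks(ys))

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]            # perfectly linear
print(round(pearson(xs, ys), 3))
zs = [1, 8, 27, 64, 125]         # monotonic but non-linear (cubes)
print(round(spearman(xs, zs), 3))
```

Both prints give 1.0: Pearson because the first relationship is exactly linear, Spearman because ranking flattens the cubic relationship into a linear one, which is precisely what makes it suitable for non-parametric data.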

The significance of correlations was the next logical point covered; not much mathematical reasoning was given apart from p < 0.05 is good :).


Reading Unit - DoS Research

FIT5108 – DoS Reading Unit Part 6

This week will look at what seems to me a less well-known form of DoS attack: DNS poisoning. This attack is more dangerous than the others we have looked at before because it can not only prevent users from accessing a service, it can lead them to a fake version of the service that asks for sensitive information. There are many methods for this attack, such as grabbing packets in a MITM attack and altering them. An example of this method, which can be executed over wireless networks, can be seen in the video below:

 As mentioned in the video, the process of reading packets, checking for a specific field, editing it and re-injecting the packet requires ‘a half-way decent computer’.

The DNS cache poisoning attack, first disclosed by Dan Kaminsky, actually poisons the source of the IP addresses the target computer is looking up. This way no real-time injection or modification is required, and a whole subnet can be attacked through its DNS server. BIND, the most common DNS server software, was vulnerable.

The patch included in BIND 9.4.2 provided a defense by randomizing the listening port.

However, this is only a partial fix, Liu warned. “Port randomization mitigates the problem but it doesn’t make an attack impossible,” he said. “It is really just a stopgap on the way to cryptographic checking, which is what the DNSSEC security extensions do.”
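The arithmetic behind Liu's point is straightforward. The following back-of-the-envelope sketch uses approximate figures (in practice the usable source-port range is somewhat smaller than 2^16): before the patch an off-path attacker only had to guess the 16-bit DNS transaction ID, afterwards also the random source port.

```python
# Probability that one spoofed response matches the resolver's outstanding
# query, before and after source-port randomization.
txid_space = 2 ** 16        # 65,536 possible transaction IDs
port_space = 2 ** 16        # ~65,536 source ports (approximate upper bound)

guess_prob_before = 1 / txid_space
guess_prob_after = 1 / (txid_space * port_space)

print(guess_prob_before)    # ~1.5e-05 per spoofed packet
print(guess_prob_after)     # ~2.3e-10 per spoofed packet
```

That factor-of-65,000 improvement is real but, as the quote says, still only raises the cost of brute force; it is not the cryptographic guarantee that DNSSEC provides.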

An example of DNS server cache poisoning effective prior to the port randomization patch can be seen below:

 So why did this vulnerability come about?

As with many aspects of the internet, convenience rather than security was the priority. The internet could function with just the root name servers that store IP addresses entered by an administrator. Every time a user wanted to view a site or service associated with a domain name, they could ask the root name server to send the relevant address. This would, however, mean a great deal more DNS traffic clogging up networks and causing bottlenecks at the authoritative name servers. So, we create caching name servers below the authoritative name servers; they store IP addresses from the first time they are asked to look them up until the expiry (TTL) of the records they retrieved. Kaminsky’s exploit waited for a target DNS server to re-check the IP addresses for a domain name, then sent a falsified response to the name server.
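The caching behaviour described above can be sketched as a toy resolver (illustrative only, nothing like a production implementation; the upstream function and addresses are invented). The moment the TTL expires and the resolver re-queries upstream is exactly the window Kaminsky's attack races to win with a forged response.

```python
# Toy caching resolver: serve from cache until the record's TTL expires,
# then re-query upstream and re-cache.
import time

class CachingResolver:
    def __init__(self, upstream):
        self.upstream = upstream       # callable: name -> (ip, ttl_seconds)
        self.cache = {}                # name -> (ip, expiry_time)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        if name in self.cache:
            ip, expiry = self.cache[name]
            if now < expiry:
                return ip              # cache hit: no upstream traffic
        ip, ttl = self.upstream(name)  # miss or expired: re-query upstream
        self.cache[name] = (ip, now + ttl)
        return ip

# Hypothetical upstream returning a fixed record with a 300-second TTL:
resolver = CachingResolver(lambda name: ("192.0.2.1", 300))
print(resolver.resolve("example.com", now=0))     # 192.0.2.1 (queried)
print(resolver.resolve("example.com", now=100))   # 192.0.2.1 (from cache)
```

Caching is what keeps DNS traffic off the authoritative servers, and it is also what makes poisoning so damaging: one forged answer accepted at re-query time is then served to every client for the lifetime of the TTL.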

Kaminsky’s blog post on the vulnerability can be found here:

Advanced Network Security

FIT5037 – Advanced Network Security Week 6

Week 6 completed the lecture on security in distributed programming. Dr. Le provided a summary of the key advantages of modern solutions provided by Java and CORBA. Given the wide variety of options and applications, there is unfortunately no standard solution. Considering the large workload already imposed by the subject’s three assignments, I have had little time to further investigate the alternatives.

I was having a look at some YouTube videos to get a better feel for the key issues in this topic. A good one was from a GoogleTechTalk (see the channel: