Categories
IT Research Methods

FIT5185 – IT Research Methods Week 6

Week 6 began statistical analysis using SPSS, specifically non-parametric tests. Non-parametric data can be described as data that does not conform to a normal distribution. A simple example is ranked data such as movie reviews (0 – 5 stars). A major limitation of non-parametric tests is the larger sample size required to gain sufficient significance to reject a null hypothesis.

A good summary of the assorted types of non-parametric tests was found at http://www.graphpad.com/www/book/choose.htm:

Goal | Measurement (from Gaussian population) | Rank, score or measurement (from non-Gaussian population) | Binomial (two possible outcomes) | Survival time
Describe one group | Mean, SD | Median, interquartile range | Proportion | Kaplan-Meier survival curve
Compare one group to a hypothetical value | One-sample t test | Wilcoxon test | Chi-square or binomial test ** | –
Compare two unpaired groups | Unpaired t test | Mann-Whitney test | Fisher’s test (chi-square for large samples) | Log-rank test or Mantel-Haenszel*
Compare two paired groups | Paired t test | Wilcoxon test | McNemar’s test | Conditional proportional hazards regression*
Compare three or more unmatched groups | One-way ANOVA | Kruskal-Wallis test | Chi-square test | Cox proportional hazard regression**
Compare three or more matched groups | Repeated-measures ANOVA | Friedman test | Cochrane Q** | Conditional proportional hazards regression**
Quantify association between two variables | Pearson correlation | Spearman correlation | Contingency coefficients** | –
Predict value from another measured variable | Simple linear regression or nonlinear regression | Nonparametric regression** | Simple logistic regression* | Cox proportional hazard regression*
Predict value from several measured or binomial variables | Multiple linear regression* or multiple nonlinear regression** | – | Multiple logistic regression* | Cox proportional hazard regression*

All of the tests described in the table above can be applied via SPSS. Note that “Gaussian population” refers to normally distributed data. Not featured in the table above is the sign test, perhaps because it is described as lacking the statistical power of the paired t-test or the Wilcoxon test.
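Although the analysis in this unit is done in SPSS, the same tests are easy to sanity-check in Python with scipy.stats; a minimal sketch with made-up rating data (nothing from the lecture):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.integers(0, 6, size=30)   # e.g. 0-5 star ratings, made-up data
    group_b = rng.integers(1, 6, size=30)

    # Parametric comparison of two unpaired groups (assumes normality)
    t_stat, t_p = stats.ttest_ind(group_a, group_b)

    # Non-parametric equivalent from the table above
    u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

    print(f"unpaired t test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")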

One question that immediately comes to mind is whether a process of normalization can be applied to non-parametric data to allow comparison against normally distributed data.

The lecture went on to describe important assumptions and the rationale behind several test methods. I will await further practical testing with SPSS before going into more detail on them.

Categories
Reading Unit - DoS Research

FIT5108 – DoS Reading Unit Part 5

Distributed Denial of Service attacks are becoming an increasingly common phenomenon, with government agencies, activists, individuals and business entities all using the attack as a tool to further their goals. Evidence of this can be seen in the list below:

Along with the increasing occurrence of DDoS attacks, the power of such attacks is also increasing. Studies conducted in 2002 and again in 2009 showed an increase in the average size of large attacks from 400 Mbps to 49 Gbps. One might argue that this increase would be matched by target networks’ ability to handle bandwidth; however, the study compared the 2002 attack size to one fifth of Harvard’s network capacity, and the 2009 size to 25 times Harvard’s capacity. Additionally, the paper noted that a 400 Mbps DDoS attack will still cause many networks to crash. The paper used as the source for these points is specific to human rights sites (a common target for DDoS attacks): Zuckerman, E., Roberts, H., McGrady, R., York, J., Palfrey, J., 2010.

Organized activist groups, particularly Anonymous, have launched several well-publicized DDoS attacks in the past 12 months, notably Operation Payback in relation to companies boycotting WikiLeaks.

Despite the rise in DDoS attacks, three out of ten web hosting providers reported having no dedicated security staff. –  Danny McPherson et al., “Worldwide Infrastructure Security Report: Volume V, 2009 Report,” Arbor Networks, January 19,  2010, http://staging.arbornetworks.com/dmdocuments/ISR2009_EN.pdf.

Methods

A 2009 study identified a shift away from purely bandwidth-based attacks. – Danny McPherson et al., “Worldwide Infrastructure Security Report: Volume V, 2009 Report.” Additionally, most major network operators reported that DDoS attacks were usually mitigated within 1 hour, much of that capability coming from the ability to call on upstream peers to disconnect attacking sub-nets.

DDoS attacks can be categorized into:

  • Application attacks – use software vulnerabilities to exhaust system resources.
  • Network attacks – saturate communication lines to the target.

Arbor’s 2009 report states that 45% of DDoS attacks were network attacks and 49% were application attacks.

Botnets and amplifiers are two key components of DDoS attacks. Botnets assist in broadening the range of IP addresses the attack comes from, reducing the chance of detection and increasing collateral damage during mitigation. A botnet of several hundred thousand computers is not, however, sufficient to generate 49 Gbps of bandwidth. To increase the bandwidth, amplifiers are used. An example of amplification is an attacker sending DNS requests to a DNS server with the source IP address set to the target. The packet sent to the DNS server by the attacker is 1/76 the size of the packet sent on to the target, so the attack has been significantly amplified.

In essence, DDoS attackers use the distributing effect of a botnet in association with resource leverage such as DNS amplification to increase the potency of their attacks.
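To get a feel for the leverage, here is a rough back-of-the-envelope sketch in Python; the per-bot rate is an assumption of mine, and the 76x factor is the DNS request/response ratio mentioned above:

    # Rough illustration of botnet + amplifier leverage (numbers are assumptions)
    target_gbps = 49          # size of a large attack per the 2009 study
    per_bot_kbps = 50         # modest upstream rate each bot spends on the attack
    amplification = 76        # DNS response ~76x the size of the spoofed request

    bots_without_amp = target_gbps * 1_000_000 / per_bot_kbps
    bots_with_amp = bots_without_amp / amplification

    print(f"bots needed without amplification: {bots_without_amp:,.0f}")
    print(f"bots needed with DNS amplification: {bots_with_amp:,.0f}")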

DNS amplification attack, source: 10networks.com

On a “normal” day, Arbor detects roughly 1300 DDoS attacks. – Arbor Networks, “Atlas Summary Report: Global Denial of Service,” accessed October 26, 2010,
http://atlas.arbor.net/summary/dos

Mitigation

The balance between reducing malicious traffic and maintaining service availability for genuine users is very difficult to manage effectively. The challenge for all network admins is to keep the proportion of genuine traffic served as high as possible. Some simple mitigation methods are listed below; a more expansive review will be conducted in the next post. The legal issues and lack of collaboration between countries and companies are other key points needing discussion in a holistic mitigation strategy.

  • Avoiding ‘edge’ ISPs, ie: tier 3, small/in-house hosting companies
  • Replacement of CMS sites with static HTML content
  • Adding aggressive caching
  • Use of DDoS-resistant servers (ie: Blogger cloud, EC2 cloud), or at least having these servers as a backup
  • Clear communication and understanding of ISP SLAs
Categories
Advanced Network Security

FIT5037 – Advanced Network Security Week 5

Week 5 saw an introduction to security in programming distributed applications. As I have very little experience in distributed programming, it was difficult to understand everything covered in the lecture. The first question posed was: when developing a distributed program, which of the following is best for secure distributed programs:

Next came a discussion of the strengths and weaknesses of stateless and stateful servers.
The analysis of risks associated with multithread/multiprocess methods for dealing with load became quite detailed, moving into the vulnerabilities of shared memory in operating systems, the most prominent being buffer overflows.

One of the key issues with using complex third-party libraries is a lack of confidence in the code. Many components in a distributed system will be written in C/C++, likely leading to vulnerabilities. We spent some time reading code to look for vulnerabilities; it seems that this will be an imperative skill for anyone pursuing a career in network security. Vulnerabilities in code range from buffer overflows, lack of sanitization allowing for injections, forced deadlocks and sharing of information between processes (ie: XSS).
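As a small illustration of the sanitization point, here is a hedged Python sketch (sqlite3 and the table are stand-ins of my own, not anything from the lecture) showing how unsanitized input enables an injection and how a parameterized query avoids it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

    user_input = "bob' OR '1'='1"  # hypothetical malicious input

    # Vulnerable: input concatenated straight into the query (no sanitization)
    unsafe = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % user_input).fetchall()
    print("unsafe:", unsafe)   # returns every row, not just bob's

    # Safer: parameterized query, the driver handles escaping
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print("safe:", safe)       # returns nothing, no user has that literal name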

Categories
IT Research Methods

FIT5185 – IT Research Methods Week 5

The topic of week 5’s lecture, presented by David Arnott, was ‘Communicating Research’. After establishing why it is important to publish research, we covered the paper publication process in some detail.

The first step discussed was the research proposal, aimed at the target audience of supervisors, scholarship committees and confirmation panels. In regards to tense, it was advised to write in the past tense, with the exception of the results discussion, which would be written in the present tense. Proofreading and polishing were highlighted as key characteristics of a successful paper.

Referencing came next, including an introduction to author-date and numbered referencing styles.

Planning at both the individual paper level and the macro level of a research career was highlighted by David as a key factor for success.

The research publication process
Categories
Reading Unit - DoS Research

FIT5108 – DoS Reading Unit Part 4

Continuing on with the deeper analysis of each attack method, this post will review the low-rate DoS attack. The key paper I will be using as a reference for this review is:

Zhang, C., Yin, J., Cai, Z., and Chen, W., 2010. RRED: Robust RED Algorithm to Counter Low-Rate Denial-of-Service Attacks. IEEE Communications Letters, Vol. 14, No. 5, May 2010.

Another key resource is this site, tracking recent Low-rate DoS attacks: http://sites.google.com/site/cwzhangres/home/posts/recentpublicationsinlow-ratedosattacks

A presentation by A. Kuzmanovic and E. W. Knightly, 2003 (http://www.cs.northwestern.edu/~akuzma/rice/doc/shrew.ppt) is heavily borrowed from.

Starting with a simple definition: low-rate DoS attacks differ from flood-type attacks in that packet transmission is limited. The TCP timeout mechanism is instead exploited to increase the ratio of target resources consumed to attacker resources expended. The reduced packet transmission also makes the attack much more difficult to identify. Low-rate DoS attacks are also known as shrew attacks or pulsing DoS (PDoS) attacks.

Two important variables in the TCP congestion avoidance mechanism are:

  • Retransmission time-out [RTO]
  • Round Trip Time Estimate [RTT]

Logically the RTO must be greater than the RTT to avoid unnecessary retransmissions. In fact, RTO = SRTT + 4 * RTTVAR, where SRTT is the smoothed RTT estimate and RTTVAR is the RTT variance.
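A minimal sketch of how that estimator evolves, using the standard smoothing gains from RFC 6298 (alpha = 1/8, beta = 1/4); the sample RTT values are made up and the 1-second minimum RTO is ignored:

    # RTO estimation in the style of RFC 6298 (sample RTT values are made up)
    ALPHA, BETA = 1 / 8, 1 / 4

    def update_rto(srtt, rttvar, sample_rtt):
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample_rtt)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample_rtt
        return srtt, rttvar, srtt + 4 * rttvar  # RTO = SRTT + 4 * RTTVAR

    srtt, rttvar = 0.10, 0.05                  # initial estimates in seconds
    for sample in [0.09, 0.11, 0.10, 0.40]:    # last sample simulates congestion
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
        print(f"sample={sample:.2f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")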

At this point it is important to look more closely at how the TCP congestion avoidance algorithm works:

  1. A ‘congestion window’ is maintained, limiting the number of packets that have been sent but not yet acknowledged by the receiver (packets in transit).
  2. When a TCP connection is initialized, or after a dropped packet, TCP enforces a ‘slow start’. The slow start mechanism starts the ‘congestion window’ small and then increases it exponentially with each acknowledged packet. This makes sense: as the TCP connection demonstrates its stability, throughput can be increased.
The shrew attack pulses packets based on minRTO, causing TCP to follow its lead. Source: http://www.cs.northwestern.edu/~akuzma/rice/doc/shrew.ppt
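A toy back-of-the-envelope sketch of why the attack is cheap for the attacker; the burst length, burst rate and minRTO value below are illustrative assumptions, not figures from the paper:

    # Toy illustration of shrew/low-rate attack economics (numbers are assumptions)
    min_rto = 1.0          # typical TCP minimum retransmission timeout (seconds)
    burst_length = 0.05    # attacker bursts for 50 ms, just long enough to cause loss
    burst_rate_mbps = 100  # link-saturating rate during the burst

    # The attacker only transmits during one short burst per RTO period, so its
    # average bandwidth is a small fraction of what a flood attack would need.
    average_rate = burst_rate_mbps * burst_length / min_rto
    print(f"average attacker rate: {average_rate:.1f} Mbps "
          f"({burst_length / min_rto:.0%} duty cycle)")

    # Meanwhile the victim flow loses packets at every burst, re-enters timeout,
    # and only gets to send briefly between bursts, so its throughput collapses.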

The testing run by A. Kuzmanovic and E. W. Knightly demonstrated that shrew attacks can reduce a target’s TCP throughput to a fraction of normal operation, achieved with a relatively low volume of malicious traffic: “87.8% throughput loss without detection”.

The low-rate DoS attack exploits the standardization of the TCP protocol. Many protocols used on the internet are standardized (ie: HTTP, IP, etc); they need to be standardized for communication to work. This does, however, present attackers with a target they know will be present on systems everywhere.

Detection and Mitigation

A. Kuzmanovic and E. W. Knightly analyze minRTO randomization and find this to be effective at the cost of general TCP performance. They also highlight that the different TCP congestion avoidance algorithm versions result in significantly different PDoS effectiveness.

Zhang et al. propose a Robust Random Early Detection [RRED] algorithm, identifying malicious TCP packets by the time frame in which they are re-sent after a timeout.

The RRED pseudo-code algorithm
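As a loose sketch of the idea as I understand it (this is not the published RRED algorithm; the threshold and flow-tracking structure are my own placeholders): a packet arriving suspiciously soon after its flow suffered a drop is scored as likely malicious, since a legitimate TCP sender would back off for at least the RTO:

    import time
    from collections import defaultdict

    SUSPICIOUS_WINDOW = 0.01   # seconds; hypothetical threshold, not from the paper

    last_drop = defaultdict(float)   # flow id -> time of the flow's last dropped packet

    def record_drop(flow_id, now=None):
        last_drop[flow_id] = now or time.time()

    def score_packet(flow_id, now=None):
        now = now or time.time()
        if now - last_drop[flow_id] < SUSPICIOUS_WINDOW:
            return "suspicious"   # re-sent too quickly after a drop to be normal TCP
        return "benign"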

I will aim to do some testing using snort or even dynamic iptables rules to allow for effective detection and mitigation of shrew attacks.

Categories
Advanced Network Security

FIT5037 – Advanced Network Security Week 4

After a review of some of the previous week’s discussion on ECC, week 4’s lecture focused on Intrusion Detection Systems [IDS]. The initial slide of the lecture featured a great summary of IDS:

Intrusion Detection System – source: week 4 lecture notes

The concepts behind IDSs are not overly complicated: analyse incoming traffic, compare it to known bad traffic and take action accordingly. Unfortunately, implementing such a system is not so simple; some of the primary difficulties are:

  • To what extent can we generalize on bad/malicious traffic recognition?
  • How much time/computational resources can be spent on each incoming packet?
  • How can knowledge base and analysis engines communicate in real-time without slowing the network?
  • How can definitions/knowledge bases keep up with new exploits?

To help deal with these difficulties, IDS systems are commonly divided into:

  • Host Based IDS [HIDS] – Examines activity on an individual host, identifying malicious process or file behaviour (ie: Tripwire, AIDE)
  • Network Based IDS [NIDS] – Examines packets flowing through a network, identifying malicious traffic

Snort, the IDS we have been experimenting with in labs, was introduced in the lecture as an example of a NIDS. Its strengths were identified as being an open-source option that is extremely fast and lightweight in comparison to its competition.

The rest of the lecture discussed how snort rules work and how to write them. A detailed version can be found in chapter 3 of: http://www.snort.org/assets/166/snort_manual.pdf

Categories
IT Research Methods

FIT5185 – IT Research Methods Week 4

IT research methods’ fourth week was presented by Joze Kuzic, providing a detailed introduction to surveys (or ‘super looks’ as the translation demands). First off, we clarified that surveys are not limited to forms that managers and students need to fill out! There are many types of surveys, ie:

  • Statistical
  • Geographic
  • Earth Sciences
  • Construction
  • Deviation
  • Archaeological
  • Astronomical
These are just a few types of non-form surveys. With this broader view we can see that almost anyone conducting research will need a good understanding of how to create effective surveys. Interviews were listed as a method for conducting surveys, although I imagine this would in most cases be quite dubious if used alone. Anonymous surveys appear to be the most common form of survey for human respondents.
After discussing some of the obvious pros and cons of mail surveys, the lecture moved into population sampling.
Considering sample sizes – source week 4 lecture notes
Likert scales were subsequently introduced, along with nominal, interval and ratio frames for question responses.
Finally the format of surveys was raised, specifically the demonstrated effect format has on results.
The test for week 5 on this subject will be on experiments and surveys.
Categories
Advanced Network Security

FIT5037 – Advanced Network Security Week 3

Week 3 of network security continued our introduction to Elliptic Curve cryptography, specifically the mathematical operations and rationale behind this public key encryption method. At the moment I am implementing the RSA requirements for assignment 1, so I did not get a chance to do much practical experimentation with ECC. For me, understanding how the algorithms work can only be achieved by implementing them.

The lecture began with a definition of the Discrete Logarithm Problem [DLP]. Put simply:

Given group elements a and B,
find the integer x such that B = a ^ x

Given a and x it is relatively easy to compute B. However, given a and B, computing x is computationally expensive.

The operation of log(B,base a) to find x is not dissimilar in computational complexity to finding p and q given n (n = pq). Note that the logarithmic function is only particularly expensive in a discrete domain.
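A tiny Python sketch of that asymmetry with toy numbers (the prime, generator and exponent are illustrative only):

    # Toy illustration of the discrete logarithm asymmetry (numbers are illustrative)
    p, a = 101, 2        # small prime modulus and a generator of the group
    x = 47               # the secret exponent
    B = pow(a, x, p)     # computing B = a^x mod p is fast (square-and-multiply)

    # Recovering x from (a, B, p) means searching the group; for the 200+ bit
    # groups used in practice this brute force is hopeless.
    recovered = next(k for k in range(1, p) if pow(a, k, p) == B)
    print(B, recovered)  # 63 47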

An example of an elliptic curve function

Moving on from the definition of elliptic curves, we related this to encryption.

Given an elliptic curve function and a point at infinity O, a set of points G can be established:

Take two points P and Q; the line through P and Q intersects the curve at a third point, and reflecting that point about the x-axis gives R, so that P + Q = R (remembering these are co-ordinates).

For every P there is an inverse -P (the same x with negated y) such that P + (-P) = O. To add a point to itself (P + P), the tangent at P is used in place of the line PQ.

ECC operation definitions:

P + Q = R -> Xr = s^2 – Xp – Xq, Yr = s(Xp – Xr) – Yp

where s = (Yp – Yq) / (Xp – Xq)

P + P = 2P = R -> Xr = s^2 – 2Xp, Yr = s(Xp – Xr) – Yp

where s = (3Xp^2 + a) / (2Yp)

I am going to begin using Sage, the Python-based maths system (http://www.sagemath.org/), to test these operations and hopefully get a graphical representation. Java also has an elliptic curve library (http://download.oracle.com/javase/1.5.0/docs/api/java/security/spec/EllipticCurve.html). I don’t have a good understanding yet of how these operations fit into the elliptic curve cryptography algorithm.

Of the two common elliptic curve families, binary and prime number curves, I will be focusing on prime number curves as they are most relevant to our assignment requirements, and hopefully the most understandable.

As the field needs to be discrete, we define the group Zp = {0, 1, …, p – 1} under arithmetic mod p, where p is a prime number.

The elliptic curve will be defined as y^2 = x^3 + ax + b (mod p), where a, b, x and y are all members of Zp.

Example:

p = 11, Zp = Z(11) -> y^2 = x^3 + x + 6 (mod 11)

E(Z11) = {(2,4), (2,7), (3,5), (3,6), (5,2), (5,9), (7,2), (7,9), (8,3), (8,8), (10,2), (10,9)}, plus the point at infinity O.

The next step is to select a generator, say g = (2,7).

Using the operations defined above we can calculate the set g, 2g, …, ng:

g=(2,7), 2g=(5,2), 3g=(8,3), 4g=(10,2), 5g=(3,6), 6g=(7,9), 7g=(7,2), 8g=(3,5), 9g=(10,9), 10g=(8,8), 11g=(5,9), 12g=(2,4)
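These values can be checked by machine. Below is a small self-contained Python sketch of the addition and doubling rules defined above over Z11 (the function and variable names are my own); it reproduces both the point set E and the multiples of g listed above:

    # Curve from the example: y^2 = x^3 + x + 6 (mod 11)
    p, a, b = 11, 1, 6
    O = None  # the point at infinity

    def inv(k):
        return pow(k, p - 2, p)  # modular inverse (p is prime)

    def add(P, Q):
        if P is O: return Q
        if Q is O: return P
        (xp, yp), (xq, yq) = P, Q
        if xp == xq and (yp + yq) % p == 0:
            return O                                   # P + (-P) = O
        if P == Q:
            s = (3 * xp * xp + a) * inv(2 * yp) % p    # tangent slope for doubling
        else:
            s = (yp - yq) * inv(xp - xq) % p           # chord slope
        xr = (s * s - xp - xq) % p
        yr = (s * (xp - xr) - yp) % p
        return (xr, yr)

    # Enumerate the curve points, matching the set E(Z11) above
    points = [(x, y) for x in range(p) for y in range(p)
              if (y * y - (x ** 3 + a * x + b)) % p == 0]
    print(points)

    # Multiples of the generator g = (2, 7), matching the list above
    g, P = (2, 7), (2, 7)
    for k in range(1, 13):
        print(f"{k}g = {P}")
        P = add(P, g)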

Now, both parties know the elliptic curve and the generator g = (2,7) – each party (let’s say Alice and Bob) must now create a public key.

Alice generates a random number, say 2. Her public key becomes 2g (see the set above) -> (5, 2).

Bob also generates a random number, say 3. His public key becomes 3g -> (8,3).

Alice wants to encrypt and send the message -> (3,6)

Here is a major difference from the RSA algorithm. Instead of only using Bob’s public key to encrypt a message, Alice must use both Bob’s public key and her own random number and public key.

So, to encrypt the message (3,6) for transmission to Bob, Alice must complete the following operation:

Cypher = (AlicePubKey(5,2), AliceRandomNumber(2) * BobPublicKey(8,3) + m(3,6))

= ((5,2), 2(8,3) + (3,6)) => ((5,2), (8,3) + (8,3) + (3,6))

See the operation definitions above for how to calculate the point additions.

Cypher ready for transmission from Alice to Bob = ((5,2), (5,9))

Now, Bob receives the cypher text and must decrypt using the elliptic curve, AlicePublicKey(5,2) and his Random(3).

The operation is:

(second component of the Cypher) – (Bob’sRandom * AlicePubKey)

= (5,9) – ((5,2) + (5,2) + (5,2)) => (5,9) – (7,9)

Again, from the operations above P + Q is defined, so let’s turn P – Q -> (5,9) – (7,9) into P + Q -> (5,9) + (7,-9), where (7,-9) mod 11 is (7,2).

Which will output the message – (3,6)!
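The whole exchange can be verified with another short, self-contained sketch using the same toy curve (again, the helper names are mine):

    # Same toy curve: y^2 = x^3 + x + 6 (mod 11), generator g = (2, 7)
    p, a = 11, 1
    O = None

    def inv(k):
        return pow(k, p - 2, p)

    def add(P, Q):
        if P is O: return Q
        if Q is O: return P
        (xp, yp), (xq, yq) = P, Q
        if xp == xq and (yp + yq) % p == 0:
            return O
        s = ((3 * xp * xp + a) * inv(2 * yp) if P == Q
             else (yp - yq) * inv(xp - xq)) % p
        xr = (s * s - xp - xq) % p
        return (xr, (s * (xp - xr) - yp) % p)

    def mul(k, P):
        R = O
        for _ in range(k):
            R = add(R, P)
        return R

    g, m = (2, 7), (3, 6)               # generator and message point
    alice_rand, bob_rand = 2, 3         # the secret random numbers from the example
    alice_pub, bob_pub = mul(alice_rand, g), mul(bob_rand, g)   # (5,2) and (8,3)

    # Alice encrypts for Bob
    cypher = (alice_pub, add(m, mul(alice_rand, bob_pub)))
    print("cypher:", cypher)            # ((5, 2), (5, 9))

    # Bob decrypts: subtract bob_rand * alice_pub from the second component
    shared = mul(bob_rand, cypher[0])
    recovered = add(cypher[1], (shared[0], (-shared[1]) % p))
    print("recovered message:", recovered)   # (3, 6)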

So, we can see that encryption and decryption are not that difficult in terms of operations. With that in mind, how can we be sure that if we are transmitting the elliptic curve, the generator and our public key, an attacker can’t find our random number (which is in fact the private key)?

The attacker will know:

Alice’s public key was found by taking a point from the set generated using the elliptic curve and the generator (2,7).

Her public key (Q) can be defined as Q = kP, where k is her secret random number and P is the generator (2,7).

Finding k given Q and P is the equivalent of a Discrete Logarithm problem which as mentioned is computationally expensive.

The safety of Alice’s secret random number rests on the Elliptic Curve Discrete Logarithm Problem presented above.
For an elliptic curve modelling tool, see: http://www.certicom.com/ecc_tutorial/ecc_javaCurve.html

 

Categories
Reading Unit - DoS Research

FIT5108 – DoS Reading Unit Part 3

This week I will start a detailed review of each of the attack methods introduced in Week 1’s post, beginning with one of the oldest DoS attacks, the Ping of Death.

I incorrectly listed this under ICMP attacks in a previous post; the ping of death actually exploits the process of IP packet reassembly.

The disassembly and reassembly process of data communication

We can see above that after being received via the communication medium (ie: cat6 cable), the Ethernet frames are unwrapped and we find IP packets. The maximum size of an IP packet according to the standard specification (http://tools.ietf.org/html/rfc791) is 65,535 bytes. The maximum payload of a standard Ethernet frame (http://standards.ieee.org/about/get/802/802.3.html) is 1,500 bytes. This means that large IP packets must be split across multiple Ethernet frames and the receiver must reassemble them. To keep track of reassembly, IP fragments carry a fragment offset field.

The fragment offset says, “I start at byte 1,000 of the complete IP packet, put me after byte 999.” Since the maximum IP packet is 65,535 bytes, one might expect the protocol to stop a fragment from claiming a position so close to that limit that a full Ethernet payload behind it could not fit. However, the IP protocol actually allows a fragment to declare an offset as high as 65,528 bytes!

So, if an attacker sends an IP packet of the maximum allowable size of 65,535 bytes, it will be broken up into Ethernet frames (Ethernet is the most common data link protocol). A ping of death occurs when the attacker modifies the last IP fragment to say “I start at byte 65,528” but adds more than 7 bytes of subsequent data. The receiver will then try to reassemble an IP packet that exceeds the 65,535 byte limit.

Because data communications and packet reassembly must be very fast, older operating systems performed no check to ensure the reassembled IP packet did not exceed the memory allocated for it. The result was a buffer overflow and a crash or erratic behaviour of the system.

On post-1998 systems a check ensures the sum of the Fragment Offset and Total Length fields of an IP fragment does not exceed 65,535 bytes. This is obviously an old, now mostly non-exploitable attack, but it is worth reviewing the types of exploits that have existed in the past as they provide some insight into future vulnerabilities.
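The check itself is simple arithmetic; a minimal sketch, with illustrative field values:

    # Sketch of the reassembly sanity check described above (values are illustrative)
    MAX_IP_PACKET = 65535

    def fragment_is_sane(offset_field, total_length):
        # The fragment offset field counts in 8-byte units
        return offset_field * 8 + total_length <= MAX_IP_PACKET

    print(fragment_is_sane(offset_field=1000, total_length=1500))   # True: ends at byte 9,500
    print(fragment_is_sane(offset_field=8191, total_length=1500))   # False: would exceed 65,535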

A program written in C by Bill Fenner implementing a ping of death using ICMP can be found here: http://insecure.org/sploits/ping-o-death.html.

Any program implementing a ping of death attack must be able to inject modified packets/frames via a network interface. This is also required in a number of other DoS attacks, so I will look at writing a basic script in Python using the PyCap library: http://pycap.sourceforge.net/. Although it does require Python 2.3 :(.

Categories
IT Research Methods

FIT5185 – IT Research Methods Week 3

Experiments were the topic of week 3’s lecture, presented by David Arnott. We started with a classification of scientific investigation:

  • Descriptive studies
  • Correlation studies
  • Experiments

Importantly the anchor of these investigations is the research question.

Terms and concepts were the next sub-section:

  •  Subject (Participant by law in Aus where people are subjects) – The target of your experimentation
  • Variables (Independent variables, Dependent variables, Intermediate variables, Extraneous variables), these are self explanatory via dictionary definitions.
  • Variance/Factor  models – Aims to predict outcome from adjustment of predictor (independent?) variables, in an atomic time frame. That is my loose interpretation.
  • Process model – Aims to explain how outcomes develop over time (the difference between variance and process models appears to be moot and, I feel, somewhat irrelevant).
  • Groups -> experimentation group, control group -> ensuring group equivalence.
  • Hypothesis – Prediction about the effect of independent variable manipulation on dependent variables. One tailed, two tailed,  null hypothesis.
  • Significance – the difference between two descriptive statistics, to an extent which cannot be attributed to chance.
  • Reliability – Can the research method be replicated by another researcher
  • Internal Validity – How much is the manipulation of the independent variable responsible for the results in the dependent variable.
  • External validity – Can the results be generalized to entities outside of the experiment
  • Construct validity – the extent to which the measures used in the experiment actually measure the construct.

Experimental Design followed:

  • Between-subject design vs Within-subject design -> are subjects manipulated in the same or differing ways.
  • After-only vs Before-after design -> at which stages the dependent variables are tested.
  • Statistical tests must reflect the experimental design:

 

Statistical test to reflect the experimental design - Source week 3 lecture notes

When creating an experimental design it seems like a good idea just to make a check list.

The coffee/caffeine example covered next seemed a bit odd, as it made the assumption that coffee and caffeine are the same thing. I recall the same type of assumption was made in regards to THC and marijuana, which was later found to be fundamentally flawed. I did not understand the decision support system example at all, so I was not really able to extrapolate much understanding from the two examples covered.