This week’s summary is a review of two papers from my reading pack: http://mchost/sourcecode/DoS/DoS%20Docs/JournalandBook/

Adaptive Defense Against Various Network Attacks, 2006, Zou, C. C., Duffield, N., Towsley, D., Gong, W., IEEE.


The method discussed in the paper does not aim to improve current malicious packet identification methods, but rather to increase the efficiency of their application by tuning their adjustable parameters based on current and recent network conditions. One example is drawn from the Hop-Count Filtering (HCF) method for mitigating DDoS attacks, which assumes that attackers do not know the real hop count from their spoofed sources to the target. The effectiveness of this particular mitigation method is not central to the contention; what matters is that HCF has adjustable parameters in its filtering. By adjusting the ‘strictness’ of HCF with a simple method of low overhead and low computational cost, the authors were able to significantly improve HCF’s performance.

Note that performance was measured against a curve in which the costs of false positives and false negatives were arbitrarily defined.
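To make that cost curve concrete, here is a minimal sketch of the idea: a weighted false-positive/false-negative cost minimised over a filter’s strictness setting. The function names, weights, and the strictness-to-error mapping are all my own illustrative inventions, not values from the paper.

```python
def filtering_cost(fp_rate, fn_rate, w_fp=1.0, w_fn=5.0):
    """Weighted cost of one filter setting; the weights on false
    positives (legitimate traffic dropped) and false negatives
    (attack traffic admitted) are arbitrary, as in the paper."""
    return w_fp * fp_rate + w_fn * fn_rate

# A notional strictness sweep: stricter filtering trades false
# negatives for false positives (illustrative numbers only).
settings = [(s, 0.10 * s, 0.20 * (1 - s))
            for s in (0.0, 0.25, 0.5, 0.75, 1.0)]
best = min(settings, key=lambda t: filtering_cost(t[1], t[2]))
print("best strictness:", best[0])
```

The point is only that once the two error costs are fixed (however arbitrarily), picking the “best” parameter setting reduces to a simple minimisation.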

Relevance to my thinking on intelligent systems in network security and DDoS mitigation:

  • Computational cost is almost always relevant
  • Network overhead is always relevant
  • The utility, or cost/reward heuristic, for the adaptive system must be supplied to the system
  • Parameter management of multiple non-adaptive mitigation or defense systems can be handled by a single adaptive service that monitors network conditions and estimates both the probability of a current attack and its severity
  • The proposed system does not use intelligent systems in the actual identification of malicious packets; perhaps this is due to the computational cost
  • Adaptive systems can be used to achieve cost minimization for security services.
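As a sketch of the last few points — a single service estimating attack probability from network conditions and driving a non-adaptive filter’s parameter — something like the following could work. The function names, the rate-inflation heuristic, and the linear interpolation are my own assumptions, not the paper’s mechanism.

```python
def estimated_attack_probability(current_rate, baseline_rate):
    """Crude stand-in for a network-condition monitor: treat
    traffic-rate inflation over baseline as evidence of an attack."""
    excess = max(0.0, current_rate - baseline_rate)
    return min(1.0, excess / baseline_rate)

def hcf_strictness(p_attack, relaxed=0.1, strict=0.9):
    """Interpolate a filter's 'strictness' knob between a relaxed
    setting (normal conditions) and a strict one (under attack)."""
    return relaxed + p_attack * (strict - relaxed)

for rate in (1000, 1500, 2500):
    p = estimated_attack_probability(rate, baseline_rate=1000)
    print(rate, round(hcf_strictness(p), 2))
```

The adaptive service itself stays cheap: one scalar estimate in, one parameter out, with the expensive identification work left to the existing non-adaptive filters.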

A Distributed Throttling Approach for Handling High Bandwidth Aggregates, 2007, Tan, C. W., Chiu, D.-M., Lui, J. C. S., Yau, D. K. Y., IEEE.


This article approaches the breakdown of network communication under flash crowds and DDoS attacks, both of which direct high traffic aggregates from distributed sources to a single location. The authors propose what I would describe as a layered router throttling approach. Throughout the article the term ‘dropped traffic’ is used to describe the effect of router throttling. The article provides some background on the router throttling strategy, but I am somewhat confused about the dropping of traffic once a certain bandwidth level is exceeded. Does this mean that all incoming packets will be dropped regardless of whether a TCP session exists? Does it mean that existing sessions will remain alive until they time out? I will need to do some further reading on the router throttling mitigation method. A key requirement of this strategy is having a number of routers in the preceding network hops subscribed to the method. See below:

Drawn from the paper, this is a desirable router structure
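My current reading of the drop behaviour — and this is an assumption on my part, pending that further reading — is that throttling is indiscriminate once the rate limit is exceeded: packets are discarded with no inspection of headers or TCP session state. A minimal sketch of that interpretation:

```python
import random

def throttle(packets, rate_limit):
    """Sketch of indiscriminate throttling as I understand it:
    once offered load exceeds the limit, packets are dropped at
    random, with no header inspection and no awareness of whether
    a packet belongs to an established TCP session."""
    offered = len(packets)
    if offered <= rate_limit:
        return packets
    keep_prob = rate_limit / offered
    return [p for p in packets if random.random() < keep_prob]
```

Under this model an existing session is not explicitly torn down; it simply loses packets in proportion to the overload, which in practice would stall it until it times out.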

The paper goes on to propose a number of algorithms, lightweight communication between routers, and evidence that by dropping traffic the distributed throttling method can keep target servers alive. Although this solution would undeniably be effective at keeping a server alive, it drops traffic based on the traffic level at the router the traffic passes through. I feel there is a very bad worst case in which the probability of a packet being dropped has very low correlation with whether or not it is malicious. The lack of header/packet inspection does, however, make the method very computationally efficient.
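If the throttle negotiation works the way I understand the router throttling literature it builds on (a Yau-et-al.-style feedback loop keeping server load inside a target window), one round would look roughly like the sketch below. The target window, demands, and step constants are all invented for illustration; this is my mental model, not the paper’s algorithm.

```python
def adjust_throttle(server_load, lower, upper, rate, step=1.0):
    """One feedback round: multiplicative decrease of the per-router
    throttle rate when the server's aggregate load exceeds the upper
    target, additive increase when it falls below the lower target."""
    if server_load > upper:
        return rate / 2.0
    if server_load < lower:
        return rate + step
    return rate

# Toy run: four upstream routers with fixed offered loads, target
# server load in [70, 80] (all numbers illustrative).
demands = [30, 5, 60, 10]
rate = 40.0
for _ in range(20):
    load = sum(min(d, rate) for d in demands)
    rate = adjust_throttle(load, 70, 80, rate)
print("per-router throttle:", rate, "server load:", load)
```

Note how the heavy source (60) is clipped hard while light sources (5, 10) pass untouched — which is exactly why legitimate users who happen to share a route with the attack aggregate get clipped too.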

  • This solution could be considered something of a benchmark that intelligent mitigation methods would need to improve on; the indiscriminate dropping of packets will result in DoS for users approaching the server via the same routes as the DDoS traffic. Keeping a server alive is probably the primary goal of DoS mitigation, but service availability should stand right next to that goal.
  • If this defense strategy were widespread, it appears to have numerous vulnerabilities that attackers would surely exploit. Attackers could probe the thresholds that trip packet dropping and possibly launch attacks that deny service to specific regions at less cost than an attack on a service not protected by distributed router throttling.
  • I get a sense that this strategy could work at a macro level, perhaps piggybacking on border routing protocols; however, the ‘dumb’ nature of the throttling seems a very limiting factor. I will obviously need to investigate router throttling methods further, as with my current understanding this solution seems sub-optimal.