OWASP Top 10 using AWS WAF Service

We have a web application that has been running on AWS for several years. As Application Load Balancers and the AWS WAF service were not available at the time, we utilised an external classic ELB pointing to a pool of EC2 instances running mod_security as our WAF solution. mod_security used the OWASP ModSecurity Core Rule Set.

Now that Application Load Balancers and the AWS WAF service are available, we would like to remove the CPU bottleneck which stems from using EC2 instances with mod_security as the current WAF.


Step 1 – Baseline performance with the EC2 WAF solution.

The baseline was completed using https://app.loadimpact.com, where we ran 1000 concurrent users with an immediate ramp-up. In our test with 2 x m5.large EC2 instances as the WAF, the WAFs became CPU-pinned within 2 minutes 30 seconds.

This test was repeated with the EC2 WAFs removed from the chain and we averaged 61ms across the loadimpact test with 1000 users. So – now we need to implement the AWS WAF solution so that it can be compared.


Step 2 – Create an ‘equivalent’ rule-set and start using AWS WAF service.

We use terraform for this environment, so the CloudFormation web ACL and rules are not being used, and I will start by testing out the terraform code published by Traveloka. After having a look at the code in more detail I decided I needed a better understanding of the terraform modules (and the AWS service), so I will write some terraform code from scratch.

So – getting started with the AWS WAF documentation, we read: ‘define your conditions, combine your conditions into rules, and combine the rules into a web ACL’.

  • Conditions: request strings, source IPs, country/geolocation of the request IP, length of specified parts of the request, SQL code (SQL injection), header values (e.g. User-Agent). Conditions can have multiple values and use regex.
  • Rules: combinations of conditions along with an ACTION (allow/block/count). There are regular rules, whereby conditions can be and/or chained, and rate-based rules, whereby a rate-based condition is added.
  • Web ACLs: where the action for each rule is defined. Multiple rules can have the same action and thus be grouped in the same ACL. The WAF uses web ACLs to assess requests against rules in the order in which the rules were added to the ACL; whichever rule (if any) matches first determines which action is taken.

Starting simple: to get started I will implement a rate limiting rule which limits 5 requests per minute to our login page from a specified IP, along with the basic OWASP rules from the terraform code published by Traveloka. Below is our main.tf with the aws_waf_owasp_top_10_rules created for this test.


Step 3 – Validate functions of AWS WAF

To confirm blocking based on the rate limiting rule I am using Apache’s Benchmarking tool, ab.

ab -v 3 -n 2000 -c 100 https://am.ameu.sonet.com.au/login  > ab_2000_100_waf_test.log

This command logs request headers (-v 3 for verbose output), makes 2000 requests (-n 2000) and runs those requests with a concurrency of 100 (-c 100). I can then see failed requests by tailing the output:

tail -f ./ab_2000_100_waf_test.log  | grep -i response

All looks good for the rate-limit-based blocking, though it appears that blocking does not occur at exactly 2000 requests in the 5 minute period. It also appears that there is a significant (5-10 min) delay on metrics coming through to the WAF stats in the AWS console.

In the AWS console, about 10 minutes after running the ab tool, we can see the blocks.

The blocks are HTTP 403 responses from the ELB:

WARNING: Response code not 2xx (403)
LOG: header received:
HTTP/1.1 403 Forbidden
Server: awselb/2.0
Date: Mon, 01 Jul 2019 22:39:11 GMT
Content-Type: text/html
Content-Length: 134
Connection: close

After success on the rate limiting rule, the OWASP Top 10 mitigation rules need to be tested. I will use OWASP ZAP to generate some malicious traffic and see what happens!

So it works – which is good, but I am not really confident about the effectiveness of the OWASP rules (as implemented on the AWS WAF). For now, they will do… but some tuning will probably be desirable as all of the requests OWASP ZAP made contained (clearly) malicious content but only 7% (53 / 755) of the requests were blocked by the WAF service. It will be interesting to see if there are false positives (valid requests that are blocked) when I conduct step 4, performance testing.


Step 4 – Conduct a performance test using the AWS WAF service.

Conducting a load test with https://app.loadimpact.com demonstrated that the AWS WAF service is highly unlikely to become a bottleneck (though this may differ for other applications and implementations).


Step 5 – Migrate PROD to the AWS WAF service.

As our environment is fully ‘terraformed’, implementing the AWS WAF service as part of our terraform code was working within an hour or so (which is good time for me!).


Next Steps

Security Automations: https://aws.amazon.com/solutions/aws-waf-security-automations/ – is this easy to do with Terraform? https://github.com/awslabs/aws-waf-security-automations has:

  • waf-reactive-blacklist
  • waf-bad-bot-blocking
  • waf-block-bad-behaving
  • waf-reputation-lists

AWS RDS (Oracle 12c) Offsite Backups

A lot of people need offsite backups for AWS RDS – which can be done trivially within AWS. However, if you need offsite backups to protect against things like an AWS account breach or AWS-specific issues, then offsite backups must include diversification of suppliers.

I am going to use Amazon’s Data Migration service to replicate AWS RDS data to a VM running in Azure and set up snapshots/backups of the Azure hosts.

The new (2018) AWS Data Migration Service solves the offsite RDS backup problem.

The steps I used to do this are:

  1. Set up an Azure Windows 2016 VM
  2. Create an IPSec tunnel between the Azure Windows 2016 VM and my AWS Native VPN
  3. Install matching version of Oracle on the Windows 2016 VM
  4. Configure Data Migration service
  5. Create a data migration and continuous replication task
  6. Snapshots/Backups and Monitoring
  7. Debug and Gotchas

1,2 – Set up Azure Windows 2016 VM and IPSec tunnel

Create a network in Azure and place a VM in it with 2 interfaces. One interface must have a public IP – call this one ‘external’ – and the other interface will be called ‘internal’. Once you have the public IP address of your Windows 2016 VM, create a ‘Customer Gateway’ in your AWS VPC pointing to that IP. You will also need a ‘Virtual Private Gateway’ configured for that VPC. Then create a ‘Site-to-Site VPN connection’ in your VPC (it won’t connect for now, but create it anyway). Configure your Azure Win 2016 VM to make an IPSec tunnel by following these instructions (the instructions are for 2012 R2, but the only tiny difference is some menu items):
https://docs.aws.amazon.com/vpc/latest/adminguide/customer-gateway-windows-2012.html#cgw-win2012-download-config. Once this is completed, both your AWS site-to-site connection and your Azure VM are trying to connect to each other. Ensure that the Azure VM has its security groups configured to allow your AWS site-to-site VPN to reach the Azure VM (I am not sure which ports and protocols specifically; I just white-listed all traffic from the two AWS tunnel endpoints). Once this was done it took around 5 minutes for the tunnel to come up (I was checking the status via the AWS Console). I also found that it requires traffic to be flowing over the link, so I was running a ping -t <aws_internal_ip> from my Azure VM. Also note that you will need to add routes to your applicable AWS route tables and update AWS security groups for the Azure subnet as required.
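
The tunnel status can also be checked from the aws cli rather than the console – something like this (the VPN connection id is a placeholder):

aws ec2 describe-vpn-connections --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[0].VgwTelemetry[].Status' --output text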

3 – Install matching version of Oracle on the Windows 2016 VM

4,5 – Configure Data Migration service and migration/replication

Log into your AWS console, go to ‘Data Migration Service’ / ‘DMS’ and hit get started. You will need to set up a replication instance (well, at least pick a size, security group, type etc). Note that the security group you add the replication host to must have access to both your RDS and your Azure DBs – I could not pick which subnet the host went into, so I had to add routes for a couple more subnets than expected. Next you will need to add your source and target databases; when you add in the details and hit test, the wizard will confirm connectivity to both databases. I ran into issues on both of these points because of not adding the correct security groups, the Windows firewall on the Azure VM, and my VPN link dropping due to no traffic (I am still investigating a fix better than ping -t for this). Next you will create a migration/replication task. If you are going to be doing ongoing replication you need to run the following on your Oracle RDS db:

  • exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
  • exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL');
  • exec rdsadmin.rdsadmin_util.alter_supplemental_logging('DROP','PRIMARY KEY');

You can filter by schema, which should provide you with a drop-down box to select which schema/s to include. Ensure that you enable logging on the migration/replication task (if you get errors, which I did the first couple of attempts, you won’t be fixing anything without the logs).

6 – Snapshots and Monitoring

For my requirements, daily snapshots/backups of the Azure VM will provide sufficient coverage. The Backup vault must be upgraded to v2 if you are using a Standard SSD disk on the Azure VM, see:
https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade. To enable email notifications for Azure backups, go to the Azure portal, select the applicable vault, click on ‘view alerts’ -> ‘Configure notifications’ -> enter an email address and check ‘critical’ (or whichever types of email notification you want). Other recommended monitoring checks include: a ping check for VPN connectivity, a status check of the DMS task (using the aws cli), and a SQL query on the destination database confirming the latest timestamp of a table that should have regular updates.
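
For the DMS task status check mentioned above, something like this works with the aws cli:

aws dms describe-replication-tasks \
  --query 'ReplicationTasks[].{Task:ReplicationTaskIdentifier,Status:Status}' --output table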

7 – Debug and Gotchas

  • Azure security group allowing AWS vpn tunnel endpoint to Azure VM
  • Windows firewall rule on VM allowing Oracle traffic (default port 1521) from AWS RDS private subnet
  • Route tables on AWS subnets to route traffic to your Azure subnet via the Virtual Private Network
  • Security groups on AWS to allow traffic from Azure subnet
  • Stability of the AWS <–> Azure VM site-to-site tunnel requires constant traffic
  • The DMS replication host seems to go into an arbitrary subnet of your VPC (there is probably some default setting I didn’t see) – check this and ensure it has routes for the Azure site-to-site
  • Ensure the RDS Oracle database has the archive log retention and supplemental logs settings as per steps 4,5.
  • Azure backup job fails with ‘Currently Azure Backup does not support Standard SSD disks’. – upgrade backup vault: https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade

Transitioning from a standard CA to Let's Encrypt!

With the go-live of https://letsencrypt.org/ it’s time to transition from the pricey and manual standard SSL cert issuing model to a fully automated process using the ACME protocol. Most orgs have numerous usages of CA-purchased certs; this post will cover hosts running apache/nginx and AWS ELBs – all of these usages are to be replaced with automated provisioning and renewal of letsencrypt-signed certs.

Provisioning and auto-renewing Apache and nginx TLS/SSL certs

For externally accessible sites where Apache/Nginx handles TLS/SSL termination moving to letsencrypt is quick and simple:

1 – Install the letsencrypt client software (there are RHEL and CentOS rpms, so that’s as simple as adding the package to puppet policies or installing it directly with yum).
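
For example, on RHEL/CentOS (package names vary with repo vintage – the EPEL package is ‘certbot’; older repos called it ‘letsencrypt’):

yum install -y epel-release
yum install -y certbot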

2 – Provision the keys and certificates for each of the required virtual hosts. If a virtual host has aliases, specify multiple names with the -d arg.
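
For example (domains and webroot path are placeholders; the client binary may be ‘letsencrypt’ or ‘certbot’ depending on the package):

certbot certonly --webroot -w /var/www/example.com -d example.com -d www.example.com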

This will provision a key and certificate + chain to the letsencrypt home directory (defaults /etc/letsencrypt). The /etc/letsencrypt/live directory contains symlinks to the current keys and certs.

3 – Update the apache/nginx virtualhost configs to use the symlinks maintained by the letsencrypt client. For example, for an apache vhost:
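
(example.com is a placeholder; the file names follow the default letsencrypt layout)

SSLCertificateFile      /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile   /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem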

4 – Create a script for renewing these certs, something like:
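
A minimal sketch, assuming the certbot client and Apache (adjust the reload command for nginx or a non-systemd init):

#!/bin/bash
# attempt renewal of any certs close to expiry, reloading apache after renewal attempts
certbot renew --quiet --post-hook "systemctl reload httpd" >> /var/log/letsencrypt-renew.log 2>&1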

5 – Run this script automatically every day with cron or jenkins.
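
For example, a cron entry (the script path is wherever the step-4 script lives):

# /etc/cron.d/letsencrypt-renew
30 3 * * * root /usr/local/sbin/renew-letsencrypt-certs.sh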

6 – Monitor the results of the script and externally monitor the expiry dates of your certificates (something will go wrong one day).
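
A quick external expiry check can be as simple as (hostname is a placeholder):

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -enddate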

Provisioning and auto-renewing AWS Elastic Load Balancer TLS/SSL certs

This has been made very easy by Alex Gaynor with a handy python script: https://github.com/alex/letsencrypt-aws. This is a great use-case for docker, and Alex has created a docker image for the script: https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. To use this with ease I created a layer on top with a new Dockerfile:
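
Something along these lines (a sketch – the region is a placeholder, and the LETSENCRYPT_AWS_CONFIG and IAM credentials described in the README are passed in at run time rather than baked into the image):

FROM alexgaynor/letsencrypt-aws:latest
ENV AWS_DEFAULT_REGION ap-southeast-2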

The explanation of these values can be found at https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. It’s quite important to create a specific IAM user to conduct the required Route53/S3 and ELB actions. The image needs to be rebuilt on changes:
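
For example (the image tag is a placeholder):

docker build -t myorg/letsencrypt-aws:latest .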

With this image built, another cron or jenkins job can be run daily, executing something like:
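
A sketch – the env file holds the LETSENCRYPT_AWS_CONFIG JSON and IAM keys described in the upstream README, and update-certificates is the script’s update command per that README:

docker run --rm --env-file /etc/letsencrypt-aws.env myorg/letsencrypt-aws:latest update-certificates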

Again, the job must be monitored along with external monitoring of certificates. See a complete SSL checker at https://github.com/markz0r/tools/tree/master/ssl_check_complete.

Configuring Snort Rules

Some reading before starting:

Before setting out, getting some basic concepts about snort is important.

This deployment will be in Network Intrusion Detection System (NIDS) mode – which performs detection and analysis on traffic. See other options and a nice, concise introduction: http://manual.snort.org/node3.html.

Rule application order: activation->dynamic->pass->drop->sdrop->reject->alert->log

Again drawing from the snort manual, some basics for understanding snort alerts:

116 – the Generator ID, which tells us what component of snort generated the alert

Eliminating false positives

After running PulledPork and using the default snort.conf there will likely be a lot of false positives, most of which will come from the preprocessor rules. There are a few options for eliminating false positives; to retain maintainability of the rulesets and the ability to keep using PulledPork, do not edit rule files directly. I use the following steps:

  1. Create an alternate startup configuration for snort and barnyard2 without -D (daemon) and barnyard2 config that only writes to stdout, not the database. – Now we can stop and start snort and barnyard2 quickly to test our rule changes.
  2. Open up the relevant documentation, especially for preprocessor tuning – see the ‘doc’ directory in the snort source.
  3. Have some scripts/traffic replays ready with traffic/attacks you need to be alerting on
  4. Iterate: read the docs, make changes to snort.conf (for preprocessor config), add exceptions/suppressions to snort’s threshold.conf or to PulledPork’s disablesid, dropsid, enablesid and modifysid confs, then run the IDS and check for false positives.
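
For example, a suppression in threshold.conf and a PulledPork disablesid entry look something like this (the GID/SID values and subnet are placeholders):

# threshold.conf – suppress a noisy preprocessor alert from a trusted subnet
suppress gen_id 120, sig_id 3, track by_src, ip 10.10.0.0/16

# disablesid.conf – stop PulledPork from loading a rule at all
1:2012086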

If there are multiple operating systems in your environment, for best results define ipvars to isolate the different OSs. This will ensure you can eliminate false positives whilst maintaining a tight alerting policy.
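
For example, in snort.conf (addresses are placeholders):

ipvar WINDOWS_SERVERS [192.168.10.0/24]
ipvar LINUX_SERVERS [192.168.20.0/24]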

HttpInspect

From doc: HttpInspect is a generic HTTP decoder for user applications. Given a data buffer, HttpInspect will decode the buffer,  find HTTP fields, and normalize the fields. HttpInspect works on both client requests and server responses.

Global config –
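
The stock snort.conf global entry looks something like this (the depths and unicode map shown are the shipped defaults – tune them to your traffic):

preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 65535 decompress_depth 65535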

Custom rules

Writing custom rules using snort’s lightweight rule description language enables snort to be used for tasks beyond intrusion detection. This example will look at writing a rule to detect Internet Explorer 6 user agents connecting to port 443 (see the example rule after the rule option categories below).

Rule Headers -> [Rule Action, Protocol, IP Addresses and Ports, Direction Operator]

Rule Options -> [content:blah; msg:blah; nocase; http_header;]

Rule Option categories:

  • general – informational only — msg:, reference:, gid:, sid:, rev:, classtype:, priority:, metadata:
  • payload – look for data inside the packet —
    • content: sets rules that search for specific content in the packet payload and trigger a response based on that data (Boyer-Moore pattern match). If there is a match anywhere within the packet’s payload, the remainder of the rule option tests are performed (case sensitive). Can contain mixed text and binary data; binary data is represented as hexadecimal with pipe separators – (content:"|5c 00|P|00|I|00|P|00|E|00 5c|";). Multiple content rules can be specified in one rule to reduce false positives. content has a number of modifiers: [nocase, rawbytes, depth, offset, distance, within, http_client_body, http_cookie, http_raw_cookie, http_header, http_raw_header, http_method, http_uri, http_raw_uri, http_stat_code, http_stat_msg, fast_pattern].
  • non-payload – look for non-payload data
  • post-detection – rule specific triggers that are enacted after a rule has been matched
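
Putting that together, a sketch of the IE6-on-443 rule (the SID, networks and direction are placeholders, and content matching like this only works if snort sees the traffic unencrypted, e.g. behind the TLS terminator):

alert tcp $EXTERNAL_NET any -> $HOME_NET 443 (msg:"Internet Explorer 6 user agent on 443"; flow:to_server,established; content:"MSIE 6.0"; nocase; classtype:policy-violation; sid:1000001; rev:1;)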

Validating certificate chains with openssl

Using openssl to verify certificate chains is pretty straightforward – see a full script below.

One thing that confused me for a bit was how to specify trust anchors without importing them into the PKI config of the OS (I also did not want to accept all of the OS trust anchors).

So, here’s what to do for specific trust anchors:
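
Pass the chosen trust anchor explicitly with -CAfile (and any intermediates with -untrusted) rather than relying on the system bundle; file names here are placeholders:

openssl verify -CAfile trust_anchor.pem -untrusted intermediates.pem server_cert.pem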

So here’s a simple script that will pull the cert chain from a [domain] [port] and let you know if it is invalid – note there will likely be some bugs from characters being encoded / carriage returns missing:
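
A minimal sketch along those lines:

#!/bin/bash
# usage: ./check_chain.sh <domain> [port]
DOMAIN=$1; PORT=${2:-443}
TMP=$(mktemp -d)
# grab every certificate the server presents
echo | openssl s_client -connect "${DOMAIN}:${PORT}" -servername "${DOMAIN}" -showcerts 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > "${TMP}/presented.pem"
# openssl x509 keeps only the first (server) certificate from the bundle
openssl x509 -in "${TMP}/presented.pem" -out "${TMP}/server.pem"
# verify the server cert using the presented chain as untrusted intermediates and the OS trust store as anchors
# (note: very old openssl versions do not set a useful exit code from verify)
openssl verify -untrusted "${TMP}/presented.pem" "${TMP}/server.pem" && echo "${DOMAIN}: chain OK" || echo "${DOMAIN}: chain INVALID"
rm -rf "${TMP}"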

SSL Review part 2

RSA in practice

Initializing SSL/TLS with https://youtube.com

In this example the youtube server is authenticated via its certificate and an encrypted communication session is established. Taking a packet capture of the process enables simple identification of the TLSv1.1 handshake (as described: http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake):

Packet capture download: http://mchost/sourcecode/security_notes/youtube_TLSv1.1_handshake_filtered.pcap

The packet capture starts with the TCP three-way handshake – Frames 1-3

With a TCP connection established the TLS handshake begins, Negotiation phase:

  1. ClientHello – Frame 4 – A random number[90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7], Cipher suites, compression methods and session ticket (if reconnecting session).
  2. ServerHello – Frame 6 – chosen protocol version [TLS 1.1], random number [1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78], CipherSuite [TLS_ECDHE_ECDSA_WITH_RC4_128_SHA], Compression method [null], SessionTicket [null]
  3. Server send certificate message (depending on cipher suite)
  4. Server sends ServerHelloDone
  5. Client responds with ClientKeyExchange containing PreMasterSecret, public key or nothing. (depending on cipher suite) – PreMasterSecret is encrypted using the server public key
  6. Client and server use the random numbers and PreMasterSecret to compute a common secret – the master secret
  7. Client sends ChangeCipherSpec record
  8. Client sends authenticated and encrypted Finished – contains a hash and MAC of previous handshake message
  9. Server decrypts the hash and MAC to verify
  10. Server sends ChangeCipherSpec
  11. Server sends Finished – with hash and MAC for verification
  12. Application phase – the handshake is now complete, and the application protocol is enabled with content type 23

client random: 90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7 = 10447666340000000000

server random: 1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78 = 1988109383203082608

Interestingly, the negotiation between youtube.com and the chromium browser resulted in an Elliptic Curve Cryptography (ECC) cipher suite for Transport Layer Security (TLS) being chosen.

Note that there is no step mentioned here for the client to verify the certificate. In the past most browsers would query a certificate revocation list (CRL), though browsers such as Chrome now either ignore CRLs or rely on certificate pinning and their own revocation lists.

Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said. – source: http://arstechnica.com/business/2012/02/google-strips-chrome-of-ssl-revocation-checking/

nf_conntrack: table full, dropping packet on Nessus server

Issue caused by having iptables rule/s that track connection state. If the number of connections being tracked exceeds the default nf_conntrack table size [65536] then any additional connections will be dropped. Most likely to occur on machines used for NAT and scanning/discovery tools (such as Nessus and Nmap).

Symptoms: Once the connection table is full any additional connection attempts will be blackholed.

 

This issue can be detected using:
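
For example (the log path assumes rsyslog defaults on RHEL):

dmesg | grep "nf_conntrack: table full"
grep "nf_conntrack: table full, dropping packet" /var/log/messages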

Current conntrack settings can be displayed using:
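
For example (the keys only exist once the nf_conntrack module is loaded):

sysctl -a 2>/dev/null | grep conntrack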

To check the current number of connections being tracked by conntrack:
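
Either of the following works (the second requires conntrack-tools to be installed):

cat /proc/sys/net/netfilter/nf_conntrack_count
conntrack -C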

Options for fixing the issue are:

  1. Stop using stateful connection rules in iptables (probably not an option in most cases)
  2. Increase the size of the connection tracking table (also requires increasing the conntrack hash table)
  3. Decreasing timeout values, reducing how long connection attempts are stored (this is particularly relevant for Nessus scanning machines that can be configured to attempt many simultaneous port scans across an IP range)

 

Making the changes persistent (RHEL 6 examples):
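
For example, add something like the following to /etc/sysctl.conf (the values are illustrative – size them to your workload and memory):

net.netfilter.nf_conntrack_max = 262144
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30

and set the conntrack hash table size (often around conntrack_max/4) via a module option, e.g. in /etc/modprobe.d/nf_conntrack.conf:

options nf_conntrack hashsize=65536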

These changes will persist on reboot.

To apply changes without reboot run the following:
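
For example (matching the values above):

sysctl -w net.netfilter.nf_conntrack_max=262144
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400
echo 65536 > /sys/module/nf_conntrack/parameters/hashsize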

To review changes:
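
sysctl net.netfilter.nf_conntrack_max net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_tcp_timeout_established
cat /sys/module/nf_conntrack/parameters/hashsize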

Reference and further reading: http://antmeetspenguin.blogspot.com.au/2011/01/high-performance-linux-router.html

Setting secure, httpOnly and cache control headers using ModSecurity

Many older web applications do not apply headers/tags that are now considered standard information security practices. For example:

  • Pragma: no-cache
  • Cache-Control: no-cache
  • httpOnly and secure flags

Adding these controls can be achieved using ModSecurity without any need to modify the application code.

In the case where I needed to modify the cookie headers to include these new controls, I added the following to the core rule set file modsecurity_crs_16_session_hijacking.conf.

 

This adds the cookie controls we were after – Depending on your web application you may need to change ‘JSESSIONID’ to the name of the relevant cookie.

You can find the cookie name simply using browser tools such as Chrome’s Developer Tools (hit F12 in chrome). Load the page you want to check cookies for, click on the Resources tab:

[Screenshot: cookies listed under the Resources tab in Chrome Developer Tools]

After setting the HttpOnly and Secure flags you can check the effectiveness using the Console tab and listing the document cookies… which should now return nothing:

document.cookie
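
The same thing can be confirmed from the command line by looking at the Set-Cookie response header (the hostname and path are placeholders):

curl -sI https://example.com/login | grep -i "set-cookie"
# expecting something like: Set-Cookie: JSESSIONID=...; Secure; HttpOnly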

Migrating to EJBCA from OpenSSL and TinyCA

Install and configure EJBCA

EJBCA 6.0.3 – http://www.ejbca.org/download.html

JBoss AS 7.1.1 Final – http://download.jboss.org/jbossas/7.1/jboss-as-7.1.1.Final/jboss-as-7.1.1.Final.zip

Prereqs:

Ref:

Detailed deployment guide: http://majic.rs/book/free-software-x509-cookbook/setting-up-ejbca-as-certification-authority

EJBCA doc: http://wiki.ejbca.org/

Architecture

Recommended architecture (source: http://ejbca.org/architecture.html)

Import existing OpenSSL CA

Step 1 – Export the OpenSSL priv key and cert to a PKCS#12 keystore:
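
For example (file names and the friendly name are placeholders; you will be prompted for an export password):

openssl pkcs12 -export -in ca_cert.pem -inkey ca_key.pem -out existing_ca.p12 -name importca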

Step 2 – Import the PKCS#12 keystore to EJBCA CA
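
Something like the following (the CA name is a placeholder, and argument order / password prompting vary between EJBCA versions – check the CLI help and docs):

bin/ejbca.sh ca importca ImportedOpenSSLCA /tmp/existing_ca.p12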

Step 3 – Verify import
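
Listing the CAs should now show the imported CA:

bin/ejbca.sh ca listcas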

### IMPORTANT ###

The distinguished name order of openssl may be the opposite of the ejbca default configuration – http://www.csita.unige.it/software/free/ejbca/ … If so, this ordering must be changed in the ejbca configuration prior to deploying (it can’t be set on a per-CA basis).

Have not been able to replicate this issue in testing.

Import existing TinyCA CA

Basic Admin and User operations

Create an end entity profile for server/client entities

Step 1 – Create a Certificate Profile (http://wiki.ejbca.org/certificateprofiles)

Step 2 – Create an End Entity Profile (http://wiki.ejbca.org/endentityprofiles)

* EndEntities can be deleted using:
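
For example, with the EJBCA CLI (the username is a placeholder; the exact subcommand name may differ between versions):

bin/ejbca.sh ra delendentity <username>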

Issuing certificates from CSRs

End entities need to be created for clients/servers that require certificates signed by our CA.

Step 1 – Create an End Entity (http://ejbca.org/userguide.html#Issue a new server certificate from a CSR)

Step 2 – Sign CSR using the End Entity which is associated with a CA

Importing existing certificates

EJBCA can create end entities and import their existing certificates one-by-one or in bulk (http://www.ejbca.org/docs/adminguide.html#Importing Certificates). Bulk inserts import all certificates under a single user, which may not be desirable. Below is a script to import all certs in a directory one by one, each under a new end entity which takes the name of the certificate CN.
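
A sketch of such a script (the ejbca.sh argument order for ‘ca importcert’ is from memory – check the admin guide linked above; the CA name, password and cert directory are placeholders):

#!/bin/bash
# import every certificate in ./certs as its own end entity, named after the cert CN
CA_NAME="MyImportedCA"
for cert in ./certs/*.pem; do
  cn=$(openssl x509 -noout -subject -in "$cert" | sed -n 's|.*CN=\([^,/]*\).*|\1|p')
  bin/ejbca.sh ca importcert "$cn" password "$CA_NAME" ACTIVE "$cert"
done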

Creating administrators

Create administrators that can sign CSR and revoke certificates: http://ejbca.org/userguide.html#Administrator%20roles

Revoking certificates

Checking certificate validity/revocation status via OCSP
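
For example, querying the EJBCA OCSP responder with openssl (the hostname and file names are placeholders; /ejbca/publicweb/status/ocsp is the usual EJBCA responder path):

openssl ocsp -issuer ca_cert.pem -cert issued_cert.pem \
  -url http://ejbca.example.com:8080/ejbca/publicweb/status/ocsp -resp_text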

Monitoring expiring certs
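
A simple check that can be scripted and run from cron – openssl’s -checkend flag returns non-zero if the cert expires within the given number of seconds (30 days here):

openssl x509 -checkend 2592000 -noout -in issued_cert.pem || echo "certificate expires within 30 days"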

 

Getting started with ModSecurity

XSS, CSRF and similar types of web application attacks have overtaken SQL injection as the most commonly seen attacks on the internet (https://info.cenzic.com/2013-Application-Security-Trends-Report.html). A very large number of web applications were written and deployed before the prevalence of, and awareness of, XSS attacks trended up. Thus, it is extremely important to have an effective method of testing for XSS vulnerabilities and mitigating them.

Changes to production code bases can be slow, costly and can miss unreported vulnerabilities quite easily. The use of application firewalls such as ModSecurity (https://github.com/SpiderLabs/ModSecurity/) become an increasingly attractive solution when faced with a decision on how to mitigate current and future XSS vulnerabilities.

ModSecurity can be embedded with Apache, NGINX and IIS, which is relatively straightforward. In cases where alternative web servers are being used, ModSecurity can still be a viable option by creating a reverse proxy (using Apache or NGINX).

How can ModSecurity be used?

  • Alerting
  • Transforming
  • Blocking
  • and more

These functions can be enacted by rules.

A default action can be created for a group of rules using the configuration directive “SecDefaultAction”.

Using the following SecDefaultAction at the top of a ruleset that we want to enable blocking and transforming on is a blunt method of protection. Redirection can also be used as a method of blocking.

A powerful web application firewall – free software!

Example of a default action to be applied by ruleset (note defaults cascade through the ruleset files):

SecDefaultAction "phase:2,log,auditlog,deny,status:403,tag:'Unspecified usage'"

Rulesets have been created by OWASP.

Using the optional rulesets modsecurity_crs_16_session_hijacking.conf and modsecurity_crs_43_csrf_protection.conf, ModSecurity can provide protection against Cross Site Request Forgery [CSRF]. The @rsub operator can inject a token into every form (and/or other html elements). ModSecurity can store the expected token value as a variable which is compared to the value posted via forms or other html elements. ModSecurity rules can be based on request methods, URIs etc – alongside the ability to chain rules, there are a huge number of options for mitigating XSS and CSRF without impacting normal application usage.

@rsub

Requirements:

  • SecRuleEngine On
  • SecRequestBodyAccess On
  • SecResponseBodyAccess On

## To enable @rsub

  • SecStreamOutBodyInspection On
  • SecStreamInBodyInspection On
  • SecContentInjection On

Injecting unique request id from mod_unique_id into forms:
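
A sketch based on the handbook-style examples (the rule id and hidden field name are placeholders; check the @rsub documentation for the exact substitution syntax in your ModSecurity version):

SecRule STREAM_OUTPUT_BODY "@rsub s/<\/form>/<input type=\"hidden\" name=\"rv_token\" value=\"%{UNIQUE_ID}\"\/><\/form>/" "phase:4,id:900100,t:none,nolog,pass"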

Some simple rules:
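
For example, a blunt rule that blocks a script tag appearing in any request argument (the id and message are placeholders – run it with pass/log or in DetectionOnly mode first):

SecRule ARGS "@contains <script" "id:900101,phase:2,t:none,t:lowercase,deny,status:403,log,msg:'Possible XSS in request arguments'"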

Pros:

  • Wide capabilities for logging, alerts, blocking, redirecting, transforming
  • Parses everything coming into your web server over HTTP
  • Virtual patching – if a vulnerability is made public that affects your web application, you can write and deploy a rule to mitigate the vulnerability much faster than re-releasing patched application code
  • Extended uses – the capabilities of ModSecurity can be applied to applications outside the scope of application security

Cons:

  • Added complexity to your application delivery chain – another point for maintenance and failure
  • Performance costs? – Though I have not had the opportunity to test the performance costs, holding session information in memory and inspecting every byte of HTTP traffic can’t be free of performance cost
  • Hardware costs – Particularly if using ModSecurity’s BodyAccess and BodyInspection features, memory usage will be significant

Improving deployments:

  • Starting off being aggressive on warnings and very light on action is a necessity to ensure no impact on normal application usage
  • From this point rules and actions need to be refined
  • Understanding how the application works allows the use of ModSecurity’s header and body inspection in effective ways

Some other notes extracted from the ModSecurity Handbook – If you decide to use ModSecurity I strongly recommend buying the handbook. It is not expensive and saves a lot of time.

### RULE STRUCTURE ###
SecRule VARIABLES OPERATOR [TRANSFORMATION_FUNCTIONS, ACTIONS]