
Eramba bulk reviews with SQL queries

We utilise the open source GRC tool Eramba; some of our instances run the community edition, which does not have an API interface. In some scenarios it is desirable to make bulk updates/completions of reviews (and potentially update the status of audits).

Doing this is slightly less trivial than expected… the steps are:

  • Generate a list of review ids and foreign keys (the foreign keys map review objects to the model object [Asset, SecurityPolicy, ThirdPartyRisk, Risk])
  • Update the relevant review objects with completion date, comments, completed status
  • Create new reviews entries for next cycle
  • Update the model object [Asset, SecurityPolicy, ThirdPartyRisk, Risk] to reference the new, next review date
  • Update the object status mapping for the updated reviews (expired and current statuses in this case)
  • Validate your updates via the web interface
    • No clearing of caches or waiting for jobs is required
-- Generate a list of review ids and foreign keys (in this case the foreign keys are for the associated Assets being reviewed):

SELECT CONCAT_WS(',',Model,id,foreign_key) from reviews where planned_date = '2022-04-19' and model = 'Asset';

-- Update the relevant review objects as desired:

update reviews set actual_date = '2022-04-20',user_id = 2, description = 'No changes to store [components, customer, deployment etc, all same] added to ISMF Agenda ', completed = 1, modified = now(), edited = now()  where id in (select id from reviews where planned_date = '2022-04-19' and model = 'Asset');

-- Create new reviews (this is usually done by the app when completing the review via the web interface). Review objects need a Model [Asset, SecurityPolicy, ThirdPartyRisk, Risk]; other models have audits:

INSERT INTO 
	reviews(model, foreign_key, planned_date, completed, created, modified, deleted)
VALUES
('Asset',115,'2023-04-19',0,now(),now(),0),
('Asset',116,'2023-04-19',0,now(),now(),0),
...;

-- Update the Asset object to reference the new, next review date (get asset ID from query used to get review id + foreign_key)
update asset set review = '2023-04-19', expired_reviews = 0 where id in (select foreign_key from reviews where planned_date = '2022-04-19' and model =  'Asset');

-- Eramba has object statuses, the list of available statuses is defined in the object_status_statuses table; the mapping of objects to statuses is in the table: object_status_object_statuses
-- Note: there are likely better, safer queries to do this:

-- Check the results select query so we know what we are updating
select * from object_status_object_statuses where foreign_key in (select id from reviews where planned_date = '2022-04-19' and model =  'Asset') and model = 'AssetReview' and name = 'expired';

-- Update the object status mappings for the relevant items; in this case two statuses need to be updated: the expired status (should now be 0) and the current status (should now be 1)

update object_status_object_statuses set status = 0 where id in (select id from object_status_object_statuses 
where foreign_key in (select id from reviews where planned_date = '2022-04-19' and model =  'Asset') and model = 'AssetReview' and name = 'expired');

update object_status_object_statuses set status = 1 where id in (select id from object_status_object_statuses 
where foreign_key in (select id from reviews where planned_date = '2022-04-19' and model =  'Asset') and model = 'AssetReview' and name = 'current_review');
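
Before validating in the web interface, a couple of quick queries can confirm the rows actually changed – a sketch only, assuming the database is named eramba and the mysql CLI is available on the host:

# count completed reviews and the flipped status rows for the batch just updated
mysql eramba -e "SELECT completed, COUNT(*) FROM reviews WHERE planned_date = '2022-04-19' AND model = 'Asset' GROUP BY completed;"
mysql eramba -e "SELECT name, status, COUNT(*) FROM object_status_object_statuses WHERE model = 'AssetReview' AND name IN ('expired','current_review') AND foreign_key IN (SELECT id FROM reviews WHERE planned_date = '2022-04-19' AND model = 'Asset') GROUP BY name, status;"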


Eramba Community 2019 in Docker (docker-compose)

Eramba is an excellent open source Governance Risk and Compliance tool. Recently (10-SEP-2019), a new major release of the community version came out. Previously I used https://github.com/digitorus/eramba which was based on https://hub.docker.com/r/k0st/alpine-eramba/ to start eramba instances quickly with docker and docker-compose.

As I could not find an updated version of these for the new release, I have made one. The repo for the 2019 community version (specifically c2.4.1) can be found here: https://github.com/markz0r/eramba-community-docker

Follow the steps in README.md and you should be testing the new eramba in no time.
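
In short (the authoritative steps are in the repo's README.md; a docker-compose.yml at the repo root is an assumption):

# clone the repo and bring the stack up with docker-compose
git clone https://github.com/markz0r/eramba-community-docker.git
cd eramba-community-docker
docker-compose up -d   # then browse to the published HTTP port once the containers settle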

Mar, 2020: Updated for community edition 2.8.1

Thanks to the team at Eramba for making the tool available for all!


OWASP Top 10 using AWS WAF Service

We have a web application that has been running on AWS for several years. As Application Load Balancers and the AWS WAF service were not available at the time, we used an external classic ELB pointing to a pool of EC2 instances running mod_security as our WAF solution. mod_security was using the OWASP ModSecurity Core Rule Set.

Now that Application Load Balancers and the AWS WAF service are available, we would like to remove the CPU bottleneck that stems from using EC2 instances with mod_security as the current WAF.


Step 1 – Base-lining performance with EC2 WAF solution.

The baseline was completed using https://app.loadimpact.com, where we ran 1000 concurrent users with immediate ramp-up. In our test with 2 x m5.large EC2 instances as the WAF, the WAFs became CPU-pinned within 2 minutes 30 seconds.

This test was repeated with the EC2 WAFs removed from the chain, and we averaged 61ms across the loadimpact test with 1000 users. So now we need to implement the AWS WAF solution so that it can be compared.


Step 2 – Create an ‘equivalent’ rule-set and start using AWS WAF service.

We use Terraform for this environment, so the CloudFormation web ACL and rules are not being used, and I will start by testing out the Terraform code published by Traveloka. After having a look at the code in more detail I decided I needed a better understanding of the Terraform modules (and the AWS service), so I will write some Terraform code from scratch.

So, getting started with the AWS WAF documentation, we read: 'define your conditions, combine your conditions into rules, and combine the rules into a web ACL'.

  • Conditions: request strings, source IPs, country/geolocation of the request IP, length of specified parts of the request, SQL code (SQL injection), header values (e.g. User-Agent). Conditions can contain multiple values and regexes.
  • Rules: combinations of conditions along with an ACTION (allow/block/count). There are regular rules, where conditions can be AND/OR chained, and rate-based rules, where a rate-based condition is added on top.
  • Web ACLs: where the action for each rule is defined. Multiple rules can have the same action and thus be grouped in the same ACL. The WAF uses Web ACLs to assess requests against rules in the order in which the rules were added to the ACL; whichever rule (if any) matches first determines the action taken.

Starting simple: to get started I will implement a rate limiting rule which limits requests to our login page to 5 per minute from a specified IP, along with the basic OWASP rules from the Terraform code published by Traveloka. Below is our main.tf with the aws_waf_owasp_top_10_rules created for this test.
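
For context, the rate-based primitive that the Terraform wraps can also be created directly against the classic WAF API; a rough AWS CLI sketch with placeholder names (rate-based rules count requests per 5-minute window, and the minimum limit at the time was 2000):

# classic (pre-WAFv2) API; rule and metric names are hypothetical
TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
aws waf create-rate-based-rule \
    --name login-rate-limit \
    --metric-name loginRateLimit \
    --rate-key IP \
    --rate-limit 2000 \
    --change-token "$TOKEN"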


Step 3 – Validate functions of AWS WAF

To confirm blocking based on the rate limiting rule I am using Apache’s Benchmarking tool, ab.

ab -v 3 -n 2000 -c 100 https://<my_target.com.au>/login  > ab_2000_100_waf_test.log

This command logs request headers (-v 3 for verbose output), makes 2000 requests (-n 2000) and issues those requests with a concurrency of 100 (-c 100). I can then see failed requests by tailing the output:

tail -f ./ab_2000_100_waf_test.log  | grep -i response

All looks good for the rate-limiting based blocking, though it appears that blocking does not occur at exactly 2000 requests in the 5 minute period. It also appears that there is a significant (5-10 min) delay before metrics come through to the WAF stats in the AWS console.

In the AWS console, about 10 minutes after running ab, we can see the blocks.

The blocks are HTTP 403 responses from the ELB:

WARNING: Response code not 2xx (403)
LOG: header received:
HTTP/1.1 403 Forbidden
Server: awselb/2.0
Date: Mon, 01 Jul 2019 22:39:11 GMT
Content-Type: text/html
Content-Length: 134
Connection: close

After success with the rate limiting rule, the OWASP Top 10 mitigation rules need to be tested. I will use OWASP ZAP to generate some malicious traffic and see what happens!

So it works – which is good, but I am not really confident about the effectiveness of the OWASP rules (as implemented on the AWS WAF). For now they will do, but some tuning will probably be desirable, as all of the requests OWASP ZAP made contained (clearly) malicious content yet only 7% (53 / 755) of the requests were blocked by the WAF service. It will be interesting to see if there are false positives (valid requests that are blocked) when I conduct step 4, performance testing.


Step 4 – Conduct a performance test using the AWS WAF service

Conducting a load test with https://app.loadimpact.com demonstrated that the AWS WAF service is highly unlikely to become a bottleneck (though this may differ for other applications and implementations).


Step 5 – Migrate PROD to the AWS WAF service.

As our environment is fully 'terraformed', implementing the AWS WAF service as part of our Terraform code was working within an hour or so (which is good time for me!).


Next Steps

Security Automations: https://aws.amazon.com/solutions/aws-waf-security-automations/ – is this easy to do with Terraform? https://github.com/awslabs/aws-waf-security-automations has:

  • waf-reactive-blacklist
  • waf-bad-bot-blocking
  • waf-block-bad-behaving
  • waf-reputation-lists

AWS RDS (Oracle 12c) Offsite Backups

A lot of people need to do offsite backups for AWS RDS – which can be done trivially within AWS. If you need offsite backups to protect against things like AWS account compromise or AWS-specific issues, offsite backups must include diversification of suppliers.

I am going to use Amazon’s Data Migration service to replicate AWS RDS data to a VM running in Azure and set up snapshots/backups of the Azure hosts.

The new (2018) AWS Data Migration Service solves offsite RDS backup problems.

The steps I used to do this are:

  1. Set up an Azure Windows 2016 VM
  2. Create an IPSec tunnel between the Azure Windows 2016 VM and my AWS Native VPN
  3. Install matching version of Oracle on the Windows 2016 VM
  4. Configure Data Migration service
  5. Create a data migration and continuous replication task
  6. Snapshots/Backups and Monitoring
  7. Debug and Gotchas

1,2 – Set up Azure Windows 2016 VM and IPSec tunnel

Create a network in Azure and place a VM in the network with 2 interfaces. One interface must have a public IP; call this one 'external' and the other interface 'internal'. Once you have the public IP address of your Windows 2016 VM, create a 'Customer Gateway' in your AWS VPC pointing to that IP. You will also need a 'Virtual Private Gateway' configured for that VPC. Then create a 'Site-to-Site VPN connection' in your VPC (it won't connect for now, but create it anyway). Configure your Azure Win 2016 VM to make an IPSec tunnel by following these instructions (the instructions are for 2012 R2 but the only tiny difference is some menu items):
https://docs.aws.amazon.com/vpc/latest/adminguide/customer-gateway-windows-2012.html#cgw-win2012-download-config. Once this is completed, both your AWS site-to-site connection and your Azure VM will be trying to connect to each other. Ensure that the Azure VM has its security groups configured to allow your AWS site-to-site VPN to reach the Azure VM (I am not sure which ports and protocols specifically; I just white-listed all traffic from the two AWS tunnel endpoints). Once this was done it took around 5 minutes for the tunnel to come up (I was checking the status via the AWS Console). I also found that it requires traffic to be flowing over the link, so I was running ping -t <aws_internal_ip> from my Azure VM. Also note that you will need to add routes to your applicable AWS route tables and update AWS security groups for the Azure subnet as required.

3 – Install matching version of Oracle on the Windows 2016 VM

4,5 – Configure Data Migration service and migration/replication

Log into your AWS console, go to 'Data Migration Service' / 'DMS' and hit get started. You will need to set up a replication instance (well, at least pick a size, security group, type etc). Note that the security group you add the replication host to must have access to both your RDS and your Azure DBs – I could not pick which subnet the host went into, so I had to add routes for a couple more subnets than expected. Next you will need to add your source and target databases; when you add in the details and hit test, the wizard will confirm connectivity to both databases. I ran into issues on both of these points because of not adding the correct security groups, the Windows firewall on the Azure VM, and my VPN link dropping due to no traffic (I am still investigating a better fix than ping -t for this). Next you will create a migration/replication task; if you are going to be doing ongoing replication you need to run the following on your Oracle RDS DB:

  • exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
  • exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL');
  • exec rdsadmin.rdsadmin_util.alter_supplemental_logging('DROP','PRIMARY KEY');

You can filter by schema, which provides a drop-down box to select which schema(s). Ensure that you enable logging on the migration/replication task (if you get errors, which I did on the first couple of attempts, you won't be fixing anything without the logs).

6 – Snapshots and Monitoring

For my requirements, daily snapshots/backups of the Azure VM provide sufficient coverage. The Backup vault must be upgraded to v2 if you are using a Standard SSD disk on the Azure VM, see:
https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade. To enable email notifications for Azure backups, go to the Azure portal, select the applicable vault, click on 'view alerts' -> 'Configure notifications' -> enter an email address and check 'critical' (or whichever type of email notifications you want). Other recommended monitoring checks include: a ping for VPN connectivity, a status check of the DMS task (using the AWS CLI), and an SQL query on the destination database confirming the latest timestamp of a table that should have regular updates.
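
For the DMS task check, something like the following with the AWS CLI does the job (the task identifier is a placeholder):

# print the status (e.g. running, stopped, failed) of the replication task
aws dms describe-replication-tasks \
    --query "ReplicationTasks[?ReplicationTaskIdentifier=='rds-to-azure-backup'].[ReplicationTaskIdentifier,Status]" \
    --output text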

7 – Debug and Gotchas

  • Azure security group allowing AWS vpn tunnel endpoint to Azure VM
  • Windows firewall rule on VM allowing Oracle traffic (default port 1521) from AWS RDS private subnet
  • Route tables on AWS subnets to route traffic to your Azure subnet via the Virtual Private Network
  • Security groups on AWS to allow traffic from Azure subnet
  • Stability of the AWS <-> Azure VM site-to-site tunnel requires constant traffic
  • The DMS replication host seems to go into an arbitrary subnet of your VPC (there is probably some default setting I didn't see), so check this and ensure it has routes for the Azure site-to-site
  • Ensure the RDS Oracle database has the archive log retention and supplemental logs settings as per steps 4,5.
  • Azure backup job fails with ‘Currently Azure Backup does not support Standard SSD disks’. – upgrade backup vault: https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade

Transitioning from a standard CA to Let's Encrypt!

With the go-live of https://letsencrypt.org/ it's time to transition from the pricey and manual standard SSL cert issuing model to a fully automated process using the ACME protocol. Most orgs have numerous usages of CA-purchased certs; this post will cover hosts running Apache/Nginx and AWS ELBs, all of which are to be replaced with automated provisioning and renewal of Let's Encrypt signed certs.

Provisioning and auto-renewing Apache and nginx TLS/SSL certs

For externally accessible sites where Apache/Nginx handles TLS/SSL termination, moving to Let's Encrypt is quick and simple:

1 – Install the letsencrypt client software (there are RHEL and CentOS RPMs, so it's as simple as adding the package to Puppet policies) or:

yum install letsencrypt

2 – Provision the keys and certificates for each of the required virtual hosts. If a virtual host has aliases, specify multiple names with the -d arg.

letsencrypt certonly --webroot -w /var/www/sites/static -d static.mwclearning.com -d img.mwclearning.com

This will provision a key and certificate + chain to the letsencrypt home directory (default /etc/letsencrypt). The /etc/letsencrypt/live directory contains symlinks to the current keys and certs.

3 – Update the apache/nginx virtualhost configs to use the symlinks maintained by the letsencrypt client, ie:

# static Web Site
<VirtualHost *:80>
	ServerName static.mwclearning.com
	ServerAlias img.mwclearning.com
	ServerAlias registry.mwclearning.ninja # <<-- dummy alias for internal site
	ServerAdmin [email protected]

	DocumentRoot /var/www/sites/static
	DirectoryIndex index.php index.html
	<Directory /var/www/sites/static>
		AllowOverride all
		Options +Indexes
	</Directory>
	ErrorLog /var/log/httpd/static_error.log
	LogLevel warn
	CustomLog /var/log/httpd/static_access.log combined
</VirtualHost>

<VirtualHost *:443>
	ServerName static.mwclearning.com
	ServerAlias img.mwclearning.com
	ServerAlias img.mwclearning.ninja
	ServerAdmin [email protected]

	DocumentRoot /var/www/sites/static
	DirectoryIndex index.php index.html
	<Directory /var/www/sites/static>
		AllowOverride all
		Options +Indexes
	</Directory>
	ErrorLog /var/log/httpd/static_ssl_error.log
	LogLevel warn
	CustomLog /var/log/httpd/static_ssl_access.log combined

	SSLEngine on
	SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
	SSLHonorCipherOrder on
	SSLInsecureRenegotiation off
	SSLCertificateKeyFile /etc/letsencrypt/live/static.mwclearning.com/privkey.pem
	SSLCertificateFile /etc/letsencrypt/live/static.mwclearning.com/cert.pem
	SSLCertificateChainFile /etc/letsencrypt/live/static.mwclearning.com/chain.pem
</VirtualHost>

4 – Create a script for renewing these certs, something like:

#!/bin/bash
# Vars
PROG_ECHO=$(which echo)
PROG_LETSENCRYPT=$(which letsencrypt)
PROG_FIND=$(which find)
PROG_OPENSSL=$(which openssl)

#
# Main
#
${PROG_ECHO} "Current expiries: "
for x in $(${PROG_FIND} /etc/letsencrypt/live/ -name cert.pem); do ${PROG_ECHO} "$x: $(${PROG_OPENSSL} x509 -noout -enddate -in $x)";done
${PROG_ECHO} "running letsencrypt renew on $(hostname)"
${PROG_LETSENCRYPT} renew --agree-tos
LE_STATUS=$?
systemctl restart httpd
if [ "$LE_STATUS" != 0 ]; then
    ${PROG_ECHO} "Automated renewal failed:"
    cat /var/log/letsencrypt/renew.log
    exit 1
else
    ${PROG_ECHO} "New expiries: "
    for x in $(${PROG_FIND} /etc/letsencrypt/live/ -name cert.pem); do echo "$x: $(${PROG_OPENSSL} x509 -noout -enddate -in $x)";done
fi
# EOF

5 – Run this script automatically every day with cron or Jenkins
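
For example, a cron.d entry along these lines (the script path and schedule are placeholders) runs the renewal early each morning and appends the output to a log:

# /etc/cron.d/letsencrypt-renew - script path is an assumption, adjust to your environment
30 3 * * * root /usr/local/bin/letsencrypt_renew.sh >> /var/log/letsencrypt/cron_renew.log 2>&1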

6 – Monitor the results of the script and externally monitor the expiry dates of your certificates (something will go wrong one day)

Provisioning and auto-renewing AWS Elastic Load Balancer TLS/SSL certs

This has been made very easy by Alex Gaynor with a handy Python script: https://github.com/alex/letsencrypt-aws. This is a great use-case for Docker, and Alex has created a Docker image for the script: https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. To use this with ease I created a layer on top with a new Dockerfile:

#
# mwc letsencrypt-aws image
#
FROM alexgaynor/letsencrypt-aws:latest

MAINTAINER Mark

ENV LETSENCRYPT_AWS_CONFIG="{\"domains\": \
[{\"elb\":{\"name\":\"TestExtLB\",\"port\":\"443\"}, \
\"hosts\":[\"test.mwc.com\",\"test-app.mwc.com\",\"test-api.mwc.com\"], \
\"key_type\":\"rsa\"}, \
{\"elb\":{\"name\":\"ProdExtLb\",\"port\":\"443\"}, \
\"hosts\":[\"app.mwc.com\",\"show.mwc.com\",\"show-app.mwc.com\", \
\"app-api.mwc.com\",\"show-api.mwc.com\"], \
\"key_type\":\"rsa\"}], \
\"acme_account_key\":\"s3://config-bucket-abc123/config_items/private_key.pem\"}"

ENV AWS_ACCESS_KEY_ID="<AWS_ACCESS_KEY_ID>"
ENV AWS_SECRET_ACCESS_KEY="<AWS_SECRET_ACCESS_KEY>"
ENV AWS_DEFAULT_REGION="ap-southeast-2"
# EOF

The explanation of these values can be found at https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. It's quite important to create a specific IAM user to conduct the required Route53/S3 and ELB actions. This image needs to be rebuilt on changes:

sudo docker build -t registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws .
sudo docker push registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws

With this image built, another cron or Jenkins job can be run daily, executing something like:

sudo docker pull registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws
sudo docker run registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws
sleep 10
sudo docker rm $(sudo docker ps -a | grep registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws | awk '{print $1}')

Again, the job must be monitored along with external monitoring of certificates. See a complete SSL checker at https://github.com/markz0r/tools/tree/master/ssl_check_complete.


Configuring Snort Rules

Some reading before starting:

Before setting out, it is important to get across some basic Snort concepts.

This deployment will be in Network Intrusion Detection System (NIDS) mode, which performs detection and analysis on traffic. See other options and a nice, concise introduction: http://manual.snort.org/node3.html.

Rule application order: activation->dynamic->pass->drop->sdrop->reject->alert->log

Again drawing from the Snort manual, here is a basic example of a Snort alert:

    [**] [116:56:1] (snort_decoder): T/TCP Detected [**]

116 – the Generator ID (GID), which tells us what component of Snort generated the alert; the second and third numbers are the signature ID and revision.

Eliminating false positives

After running PulledPork and using the default snort.conf there will likely be a lot of false positives, most of which will come from the preprocessor rules. There are a few options for eliminating them; to retain maintainability of the rulesets and the ability to keep using PulledPork, do not edit rule files directly. I use the following steps:

  1. Create an alternate startup configuration for snort and barnyard2 without -D (daemon) and a barnyard2 config that only writes to stdout, not the database – now we can stop and start snort and barnyard2 quickly to test our rule changes.
  2. Open up the relevant documentation, especially for preprocessor tuning – see the ‘doc’ directory in the snort source.
  3. Have some scripts/traffic replays ready with traffic/attacks you need to be alerting on
  4. Iterate through reading the docs, making changes to snort.conf (for preprocessor config), adding exceptions/suppressions to Snort's threshold.conf or PulledPork's disablesid, dropsid, enablesid and modifysid confs, and running the IDS to check for false positives (see the sketch after this list).
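
For example, a noisy preprocessor alert can be suppressed or rate-limited via threshold.conf without touching any rule files – a sketch only (GID 120 is http_inspect's server-side decoder; the SIDs and the conf path are placeholders, take the GID:SID pair from your own alert output):

# suppress one preprocessor alert entirely, and rate-limit another to one event per source per minute
cat >> /etc/snort/threshold.conf <<'EOF'
suppress gen_id 120, sig_id 3
event_filter gen_id 120, sig_id 8, type limit, track by_src, count 1, seconds 60
EOF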

If there are multiple operating systems in your environment, for best results define ipvars to isolate the different OSs. This will ensure you can eliminate false positives whilst maintaining a tight alerting policy.

HttpInspect

From the docs: HttpInspect is a generic HTTP decoder for user applications. Given a data buffer, HttpInspect will decode the buffer, find HTTP fields, and normalize the fields. HttpInspect works on both client requests and server responses.

Global config –

Custom rules

Writing custom rules using Snort's lightweight rule description language enables Snort to be used for tasks beyond intrusion detection. This example will look at writing a rule to detect Internet Explorer 6 user agents connecting to port 443 (see the example rule after the rule option categories below).

Rule Headers -> [Rule Action, Protocol, IP Addresses and Ports, Direction Operator]

Rule Options -> [content: blah; msg: blah; nocase; http_header;]

Rule Option categories:

  • general – informational only — msg:, reference:, gid:, sid:, rev:, classtype:, priority:, metadata:
  • payload – look for data inside the packet —
    • content: allows you to set rules that search for specific content in the packet payload and trigger a response based on that data (Boyer-Moore pattern match). If there is a match anywhere within the packet's payload, the remainder of the rule option tests are performed (case sensitive). Content can contain mixed text and binary data; binary data is represented as hexadecimal with pipe separators — (content:"|5c 00|P|00|I|00|P|00|E|00 5c|";). Multiple content rules can be specified in one rule to reduce false positives. Content has a number of modifiers: [nocase, rawbytes, depth, offset, distance, within, http_client_body, http_cookie, http_raw_cookie, http_header, http_raw_header, http_method, http_uri, http_raw_uri, http_stat_code, http_stat_msg, fast_pattern].
  • non-payload – look for non-payload data
  • post-detection – rule specific triggers that are enacted after a rule has been matched
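
Putting that together, a first cut of the IE6 rule might be appended to local.rules – a sketch only (the SID, variables, direction and file path are placeholders, and since traffic to port 443 is normally encrypted a plain content match like this will only fire on cleartext or decrypted traffic):

# hypothetical local rule - flag an IE6 User-Agent string sent towards port 443
cat >> /etc/snort/rules/local.rules <<'EOF'
alert tcp $HOME_NET any -> $EXTERNAL_NET 443 (msg:"POLICY IE6 user agent to port 443"; \
    flow:to_server,established; content:"MSIE 6.0"; nocase; \
    classtype:policy-violation; sid:1000001; rev:1;)
EOF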

Validating certificate chains with openssl

Using openssl to verify certificate chains is pretty straightforward – see a full script below.

One thing that confused me for a bit was how to specify trust anchors without importing them into the PKI config of the OS (I also did not want to accept all of the OS trust anchors).

So, here's what to do for specific trust anchors:

# make a directory and copy in all desired trust anchors
# make sure the certs are in pem format, named <bah>.pem
mkdir ~/trustanchors
# create softlinks with hash 
cd ~/trustanchors
for X in ./*.pem;do ln -s $X ./`openssl x509 -hash -noout -in $X`.0;done

# confirm the trust anchor(s) are working as expected
openssl verify -CApath ~/trustanchors -CAfile <some_intermediate>.pem <my_leaf>.pem

So here's a simple script that will pull the cert chain from a [domain] [port] and let you know if it is invalid – note there will likely be some bugs from characters being encoded / carriage returns missing:

#!/bin/bash

# chain_collector.sh [domain] [port]
# output to stdout
# assumes you have a directory with desired trust anchors at ~/trustanchors

if [ $# -ne 2 ]; then
	echo "USAGE: chain_collector.sh [domain] [port]"
	exit 1
fi

TRUSTANCHOR_DIR="$HOME/trustanchors"
SERVER=$1:$2
TFILE="/tmp/$(basename $0).$$.tmp"
OUTPUT_DIR=$1_$2
mkdir $OUTPUT_DIR

openssl s_client -showcerts -servername $1 -connect $SERVER 2>/dev/null > $TFILE
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "tmpcert." c ".pem"}' < $TFILE 
i=1 
for X in tmpcert.*.pem; do
    if openssl x509 -noout -in $X 2>/dev/null ; then 
        echo "#############################"
        cn=$(openssl x509 -noout -subject -in $X | sed -e 's#.*CN=\(\)#\1#')
	echo CN: $cn
	cp $X $OUTPUT_DIR/${cn// /_}.$((i-1)).pem 
	cert_expiry_date=$(openssl x509 -noout -enddate -in $X \
			| awk -F= ' /notAfter/ { printf("%s\n",$NF); } ')
	seconds_until_expiry=$(echo "$(date --date="$cert_expiry_date" +%s) - $(date +%s)" | bc)
	days_until_expiry=$(echo "$seconds_until_expiry/(60*60*24)" | bc)
	echo Days until expiry: $days_until_expiry
	echo $(openssl x509 -noout -text -in $X | grep -m1 "Signature Algorithm:")
	echo $(openssl x509 -noout -issuer -in $X)
	if [ -a tmpcert.$i.pem ]; then
		echo Parent: $(openssl x509 -noout -subject -in tmpcert.$i.pem | sed -e 's#.*CN=\(\)#\1#')
		echo Parent Valid? $(openssl verify -verbose -CApath "$TRUSTANCHOR_DIR" -CAfile tmpcert.$i.pem $X)
	else
		echo "Parent Valid? This is the trust anchor"
	fi
	else
		echo "Parent Valid? This is the trust anchor"
	fi
	echo "#############################"
    fi
    ((i++))
done
rm -f tmpcert.*.pem $TFILE

SSL Review part 2

RSA in practice

Initializing SSL/TLS with https://youtube.com

In this example the YouTube server is authenticated via its certificate and an encrypted communication session is established. Taking a packet capture of the process enables simple identification of the TLSv1.1 handshake (as described: http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake):

Packet capture download: http://mchost/sourcecode/security_notes/youtube_TLSv1.1_handshake_filtered.pcap

The packet capture starts with the TCP three-way handshake – Frames 1-3

With a TCP connection established, the TLS handshake begins with the negotiation phase:

  1. ClientHello – Frame 4 – a random number [90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7], cipher suites, compression methods and a session ticket (if reconnecting a session).
  2. ServerHello – Frame 6 – chosen protocol version [TLS 1.1], random number [1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78], CipherSuite [TLS_ECDHE_ECDSA_WITH_RC4_128_SHA], Compression method [null], SessionTicket [null]
  3. Server sends certificate message (depending on cipher suite)
  4. Server sends ServerHelloDone
  5. Client responds with ClientKeyExchange containing the PreMasterSecret, a public key, or nothing (depending on cipher suite) – the PreMasterSecret is encrypted using the server's public key
  6. Client and server use the random numbers and PreMasterSecret to compute a common secret – the master secret
  7. Client sends a ChangeCipherSpec record
  8. Client sends an authenticated and encrypted Finished – contains a hash and MAC over the previous handshake messages
  9. Server decrypts the hash and MAC to verify
  10. Server sends ChangeCipherSpec
  11. Server sends Finished – with a hash and MAC for verification
  12. Application phase – the handshake is now complete, the application protocol is enabled with content type 23

client random: 90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7 = 10447666340000000000

server random: 1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78 = 1988109383203082608

Interestingly, the negotiation between youtube.com and the Chromium browser resulted in an Elliptic Curve Cryptography (ECC) Cipher Suite for Transport Layer Security (TLS) as the chosen cipher suite.
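
The negotiated protocol and cipher can also be confirmed without a packet capture using openssl s_client (assuming your openssl build still permits TLSv1.1):

# print the protocol and cipher suite negotiated with the server
openssl s_client -connect youtube.com:443 -tls1_1 < /dev/null 2>/dev/null | grep -E '^ *(Protocol|Cipher) *:'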

Note that there is no step mentioned here for the client to verify the certificate. In the past most browsers would query a certificate revocation list (CRL), though browsers such as Chrome now either ignore CRL functionality or use certificate pinning.

Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said. – source: http://arstechnica.com/business/2012/02/google-strips-chrome-of-ssl-revocation-checking/


nf_conntrack: table full, dropping packet on Nessus server

This issue is caused by having iptables rule/s that track connection state. If the number of connections being tracked exceeds the default nf_conntrack table size [65536], any additional connections will be dropped. It is most likely to occur on machines used for NAT and on scanning/discovery tools (such as Nessus and Nmap).

Symptoms: once the connection table is full, any additional connection attempts will be blackholed.

This issue can be detected using:

$dmesg
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
...

Current conntrack settings can be displayed using:

$sysctl -a | grep conntrack
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmpv6_timeout = 30
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_count = 1
net.netfilter.nf_conntrack_buckets = 16384
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_expect_max = 256
net.ipv6.nf_conntrack_frag6_timeout = 60
net.ipv6.nf_conntrack_frag6_low_thresh = 196608
net.ipv6.nf_conntrack_frag6_high_thresh = 262144
net.nf_conntrack_max = 65536

To check the current number of connections being tracked by conntrack:

/sbin/sysctl net.netfilter.nf_conntrack_count
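
To keep an eye on headroom (for example from a cron job on a scanning host), the count can be compared with the maximum – a small sketch:

# print current conntrack usage as a percentage of the table size
cur=$(sysctl -n net.netfilter.nf_conntrack_count)
max=$(sysctl -n net.netfilter.nf_conntrack_max)
echo "conntrack usage: $cur/$max ($((100 * cur / max))%)"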

Options for fixing the issue are:

  1. Stop using stateful connection rules in iptables (probably not an option in most cases)
  2. Increase the size of the connection tracking table (also requires increasing the conntrack hash table)
  3. Decreasing timeout values, reducing how long connection attempts are stored (this is particularly relevant for Nessus scanning machines that can be configured to attempt many simultaneous port scans across an IP range)

 

Making the changes in a persistent fashion, RHEL 6 examples:

# 2: Increase number of connections
echo "net.netfilter.nf_conntrack_max = 786432" >> /etc/sysctl.conf
echo "net.netfilter.nf_conntrack_buckets = 196608" >> /etc/sysctl.conf
# Increase number of buckets to change ratio from 1:8 to 1:4 (more memory use but better performance)
echo 'echo "196608" > /sys/module/nf_conntrack/parameters/hashsize' >> /etc/rc.local

# 3: Alter timeout values
# Generic timeout from 10 mins to 1 min
echo "net.netfilter.nf_conntrack_generic_timeout = 60" >> /etc/sysctl.conf

# Change unacknowledged timeout to 30 seconds (from 5 mins)
echo "net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30" >> /etc/sysctl.conf

# Change established connection timeout to 1 hour (from 5 days)
echo "net.netfilter.nf_conntrack_tcp_timeout_established = 3600" >> /etc/sysctl.conf

These changes will persist on reboot.

To apply changes without reboot run the following:

sysctl -p
echo "196608" > /sys/module/nf_conntrack/parameters/hashsize

To review changes:

sysctl -a | grep conntrack

Reference and further reading: http://antmeetspenguin.blogspot.com.au/2011/01/high-performance-linux-router.html


Setting secure, httpOnly and cache control headers using ModSecurity

Many older web applications do not apply headers/tags that are now considered standard information security practices. For example:

  • Pragma: no-cache
  • Cache-Control: no-cache
  • httpOnly and secure flags

Adding these controls can be achieved using ModSecurity without any need to modify the application code.

In the case where I needed to modify the cookie headers to include these new controls, I added the following to the core rule set file modsecurity_crs_16_session_hijacking.conf:

#
# This rule will identify the outbound Set-Cookie SessionID data and capture it in a setsid
#
# adding httpOnly
Header edit Set-Cookie "(?i)^(JSESSIONID=(?:(?!httponly).)+)$" "$1; httpOnly"
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "0"

 

This adds the cookie controls we were after – depending on your web application you may need to change 'JSESSIONID' to the name of the relevant cookie.
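
Note that the edit above only appends httpOnly; the Secure flag (checked below) can be added with a second edit in the same style – a sketch, assuming the same JSESSIONID cookie name:

# append the Secure flag where it is not already present (mirrors the httpOnly edit)
cat >> modsecurity_crs_16_session_hijacking.conf <<'EOF'
Header edit Set-Cookie "(?i)^(JSESSIONID=(?:(?!secure).)+)$" "$1; secure"
EOF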

You can find the cookie name simply by using browser tools such as Chrome's Developer Tools (hit F12 in Chrome). Load the page you want to check cookies for and click on the Resources tab:


After setting the HttpOnly and Secure flags you can check their effectiveness using the Console tab and listing the document cookies, which should now return nothing:

document.cookie