Categories
Random

Downloading Google Drive with Python via Drive API

Python script for downloading snapshots of all files in your Google Drive, including those shared with you.

source: https://github.com/SecurityShift/tools/blob/master/backup_scripts/google_drive_backup.py
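
The linked script does the heavy lifting; as a rough illustration of the Drive API calls involved, here is a minimal sketch using google-api-python-client, assuming OAuth credentials are already stored in token.json (the file names, scope and lack of error handling are illustrative, not taken from the linked script):

import io

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# Assumes an OAuth flow has already produced token.json with a read-only Drive scope
creds = Credentials.from_authorized_user_file(
    'token.json', ['https://www.googleapis.com/auth/drive.readonly'])
service = build('drive', 'v3', credentials=creds)

page_token = None
while True:
    resp = service.files().list(
        pageSize=100,
        fields='nextPageToken, files(id, name, mimeType)',
        pageToken=page_token).execute()
    for f in resp.get('files', []):
        # Google-native documents need files().export() instead of get_media()
        if f['mimeType'].startswith('application/vnd.google-apps'):
            continue
        request = service.files().get_media(fileId=f['id'])
        with io.FileIO(f['name'], 'wb') as fh:
            downloader = MediaIoBaseDownload(fh, request)
            done = False
            while not done:
                _, done = downloader.next_chunk()
    page_token = resp.get('nextPageToken')
    if page_token is None:
        break

The real script also handles exported formats and shared items more thoroughly, so use it rather than this sketch for actual backups.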



		
Categories
Intro to DevOps Online Courses

Intro to DevOps – Lesson 3

Topics:

  • Continuous integration/Delivery (Jenkins)
    • Automate from commits to repo to build to test to deploy
  • Monitoring (Graphite)

At minimum there will be 6 environments

  1. local (dev workstations)
  2. dev (sandbox)
  3. integration (test build and side effects)
  4. test (UAT, Performance, QA may be many environments)
  5. staging (live data? – replication of production)
  6. production

From coding to prod

If the handover at each of these steps were manual, there would be too many opportunities for delays and errors. Sooo:

  • Continuous integration
    • Maintain a code repository (git)
    • Automate the build (Jenkins, TravisCI, CircleCI)
    • Test the build (Jenkins, TravisCI, CircleCI)
    • Commit changes often (manual)
    • Build each commit (Jenkins)
    • Fix bugs immediately (manual)
    • Test in a clone environment (test suites)

Some practical work setting up Jenkins…

 

Categories
Intro to DevOps Online Courses

Intro to DevOps – Lesson 2

Topics:

  • How to get Dev and ITOps working together
  • Looking at some tools to enable that integration

Started off by looking at the basic conflict between ITOps and Dev: in essence, the risk aversion of ITOps, why it's there and why it's both good and bad. First-hand experience of being woken up many nights after a ‘great’ new release makes this quite pertinent.

  • ITOps needs to run systems that are tested and tightly controlled – this means that when a release is coming that requires new or significantly changed components, ITOps needs to be included in discussions and made aware so they can ensure stability in Production
  • Dev needs to adopt, trial and use new technologies to create software solutions that meet user and business requirements in an effective manner
  • Performance testing needs to be conducted throughout the development iterations, and this is impossible if the development environments are significantly different from production
[Image: DevOps release fixes]
These improvements would remove most of the real-world issues we experience when conducting release deployments.

Performance tests:

How can performance tests be conducted throughout the development of new releases, particularly if these releases become more regular?

Proposed answer 1 – a ‘Golden Image’: a fixed image that is used for developing, testing and operating services. This includes apps, libs and OS. Docker makes this more practical with containers.

Proposed answer 2 – apply configuration management to all machines (not sure how practical this could be).

Practical lab:

Installed VirtualBox, Vagrant, git, ssh and Packer.

Vagrant configures VMs; Packer enables building of ‘golden images’.

Packer template sections (a minimal template sketch follows this list):

  • Builders take a source image and produce a machine image.
  • Provisioners install and configure software within the running machine (shell, Chef and Puppet scripts).
  • Post-processors conduct tasks on the images output by builders, e.g. compress (https://www.packer.io/docs/templates/post-processors.html). Post-processors can also produce artifacts for AWS, DigitalOcean, Hyper-V, Parallels, QEMU, VirtualBox and VMware.
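
A minimal template sketch showing those sections together (illustrative only – the Docker source image, package and tag are placeholders, and this is the older JSON template format, not the lab's template):

{
  "builders": [
    { "type": "docker", "image": "ubuntu:14.04", "commit": true }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["apt-get update", "apt-get install -y nginx"] }
  ],
  "post-processors": [
    { "type": "docker-tag", "repository": "example/golden-image", "tag": "0.1" }
  ]
}

Running packer build against a template like this produces the tagged ‘golden image’ that dev, test and production can all consume.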

Lab instructions were pretty straightforward. On to lesson 3.

Categories
ITOps

Monolithic to Microservices

Review of ‘AWS re:Invent 2015 | (ARC309) Microservices: Evolving Architecture Patterns in the Cloud’ presentation.

From monolithic to microservices, on AWS.


Ruby on Rails -> Java-based functional services.


Java-based functional services resulted in a requirement for everything to be loaded into memory – 20 minutes to start services. Still very large services, with many engineers working on each one. This means that commits to those repos take a long time to QA, so a commit is made, work starts on something else, then a week or two later you have to fix what you hardly remember. And to get specific information out of those big Java apps it was necessary to parse the entire homepage. So…


The big SQL database is still there – this means schema changes are difficult to do without outages. Included in this stage of microservices was:

  • Team autonomy – give teams a problem and they  can build whatever services they need
  • Voluntary adoption – tools/techniques/processes
  • Goal driven initiatives
  • Failing fast and openly

What are the services? – Email, shipping cost, recommendations, admin, search

Anatomy of a service:


A service has its own datastore and completely owns it; this is how dependency on one big schema is avoided. Services at gilt.com average 2,000 lines of code and 32 source files.

Service discovery – enormously simple? Discovery is: a client needs to get to a service – how is it going to get there? ‘It has the name of the service – look up that URL’.


Use ZooKeeper as a highly available store.
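
As a rough sketch of that pattern (not Gilt's actual implementation – the znode paths, endpoint and ZooKeeper hosts are placeholders), a provider can register an ephemeral node and a consumer can resolve the service name, for example with the Python kazoo client:

from kazoo.client import KazooClient

zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
zk.start()

# Provider: register this instance as an ephemeral node so it disappears if the instance dies
zk.ensure_path('/services/recommendations')
zk.create('/services/recommendations/instance-', b'10.0.1.17:8080',
          ephemeral=True, sequence=True)

# Consumer: resolve the service name to one of the registered endpoints
children = zk.get_children('/services/recommendations')
data, _ = zk.get('/services/recommendations/' + children[0])
endpoint = data.decode()

# (in a real provider the session stays open for the life of the process)
zk.stop()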

Moving all this to the cloud via ‘lift and shift’ with AWS Direct Connect. In AWS all services were given their own EC2 instances and dockerized.

Being a good citizen in a microservices organisation:

  • Service Consumer
    • Design for failure
    • Expect to be throttled
    • Retry with exponential backoff (see the sketch after this list)
    • Degrade gracefully
    • Cache when appropriate
  • Service Provider
    • Publish your metrics
      • Throughput, error rate, latency
    • Protect yourself (throttling)
    • Implementation details private
    • Maintain backwards compatibility
    • See Amazon API Gateway
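
A minimal sketch of the retry-with-exponential-backoff item above – call_service stands in for whatever downstream request the consumer makes:

import random
import time

def call_with_backoff(call_service, max_attempts=5, base_delay=0.5):
    """Retry a flaky downstream call, backing off exponentially with jitter."""
    for attempt in range(max_attempts):
        try:
            return call_service()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts - let the caller degrade gracefully
            # Sleep 0.5s, 1s, 2s, 4s... plus jitter so callers do not retry in lockstep
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))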

Again service discovery – simple with naming conventions, DNS and load balancers. Avoid DNS issues with dynamic service registry (ZooKeeper, Eureka, Consul, SmartStack).

Data management – moving away from the schema-change problems, and the other problems (custom stored procedures, being stuck with one supplier, single point of failure). Microservices must include decentralisation of datastores: services own their datastores and do not share them. This has a number of benefits, from being able to choose whatever datastore technology best meets the service's needs, to making changes without affecting other services, to scaling the datastores independently. So how do we ensure transactional integrity? Distributed locking sounds horrible. Do all services really need strict transactional integrity? Use queues to retry later (see the sketch below).
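
A hedged sketch of the queue idea using Amazon SQS via boto3 (the queue URL, region and apply_update are placeholders):

import json

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/failed-updates'  # placeholder

def apply_update(update):
    # Placeholder for the service's own write logic
    print('applying', update)

def record_for_retry(update):
    # Instead of a distributed transaction, park the failed write on a queue
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(update))

def retry_worker():
    # A worker drains the queue and re-applies updates until they succeed;
    # messages that fail again simply reappear after the visibility timeout
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        for msg in resp.get('Messages', []):
            apply_update(json.loads(msg['Body']))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])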

Aggregation of data – need to do analytics on the data? Amazon Kinesis Firehose, Amazon SQS, or a custom feed.

Continuous Delivery/Deployment – introduced some AWS services, e.g. CodeDeploy – or just use Jenkins.

How many services per container/instance? Packing more than one in brings problems: monitoring granularity suffers, scaling is less granular, ownership is less atomic, and continuous deployment is tougher with immutable containers.

I/O explosion – mapping dependencies between services is tough. Some will be popular hotspots. Service consumers need to cache where they can. Dependency injection is also an option – you can only make a request to service A if you include the required data from services B and C in your request.

Monitoring – logs are key; also, tracing requests through fanned-out dependencies is much easier if a request ID header is required to be passed on (see the sketch below).
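
A minimal sketch of that pass-the-header-on idea (the header name, downstream URL and framing are placeholders, not from the presentation):

import uuid

import requests

REQUEST_ID_HEADER = 'X-Request-Id'  # placeholder - whatever name your services agree on

def handle_request(incoming_headers):
    # Reuse the caller's request id if present, otherwise start a new trace
    request_id = incoming_headers.get(REQUEST_ID_HEADER, str(uuid.uuid4()))
    # Pass the same id to every downstream call so log lines can be joined up later
    downstream = requests.get('http://recommendations.internal/api/v1/items',
                              headers={REQUEST_ID_HEADER: request_id},
                              timeout=2)
    return request_id, downstream.status_code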

Unique failures – watched ‘a day in the life’ of a Netflix engineer… good points on failures. We accept an increased failure rate to maintain velocity; we just want to ensure failures are unique. For this to happen we need open and safe feedback.

 

Categories
Intro to DevOps

Intro to DevOps – Lesson 1

With all of the technology solutions and paradigms emerging in the IT space, it can be difficult to get a full understanding of everything, particularly before developing biases. So… from the perspective of an infosec and ops guy, I will list out some notes from my own review of the current direction of DevOps. This review is based primarily on Udacity's Intro to DevOps course and assorted blogs.

Why do it?

Reduce wastage in software development and operation workflows. Simply, more value, less pain.

What is it?

Most of the definitions out there boil down to communication and collaboration between Developers, QA and IT Ops throughout all stages of the development lifecycle.

  • No more passing the release from Dev to IT Ops
  • No more clear boundaries between Dev and IT Ops people/environments/processes and tools
  • No more inconsistency between Dev and Prod environments
  • No more deciding whose problem bugs are
  • No more 7-day release deployments
  • No more separate tool sets


Agile development + Continuous Monitoring + Delivery + Automation + Feedback loops =  DevOps?

  • Create shared view on goals, responsibilities, priorities and benefits
  • Learn from failures (feedback mechanisms include devs and operators)
  • Reduce risk and size of changes
  • Drive automation
  • Drive feedback loops
  • Validate ideas as quickly and cheaply (cost + risk) as possible

What DevOps is not:

  • Developers overtaking operations
  • Just tools (though it really is enabled by, and perhaps dependent on, tools)

How do you apply it?

CAMS – Culture, Automation, Measurement and Sharing

  • Culture -> Agile like (People>Process>Tools) + Lean (don’t do what’s not valuable)
  • Automation -> Deployment, Unit Testing, CI -> These come together with DevOps?
  • Measurement -> Infrastructure, usage, release, performance, business metrics, processes, trends
  • Sharing -> Without the functional separation, feedback loops are tighter; particularly between coding and operating

What technologies enable it?

Coming in the next lesson – a good resource for tools is stackshare.io

Other thoughts

Not much so far – looking forward to testing some tools. Particularly how patching and vulnerability management can be applied to docker images.

 

 

Categories
Random

Configuring Snort Rules

Some reading before starting:

Before setting out, getting some basic concepts about snort is important.

This deployment will be in Network Intrusion Detection System (NIDS) mode, which performs detection and analysis on traffic. See the other options and a nice, concise introduction: http://manual.snort.org/node3.html.

Rule application order: activation->dynamic->pass->drop->sdrop->reject->alert->log

Again drawing from the Snort manual, a basic understanding of Snort alerts can be gained:

    [**] [116:56:1] (snort_decoder): T/TCP Detected [**]

116 – Generator ID, tells us what component of Snort generated the alert; 56 – Signature ID within that generator; 1 – revision of the signature.

Eliminating false positives

After running PulledPork and using the default snort.conf there will likely be a lot of false positives. Most of these will come from the preprocessor rules. There are a few options for eliminating false positives; to retain maintainability of the rulesets and the ability to keep using PulledPork, do not edit rule files directly. I use the following steps:

  1. Create an alternate startup configuration for Snort and Barnyard2 without -D (daemon) and a Barnyard2 config that only writes to stdout, not the database – now we can stop and start Snort and Barnyard2 quickly to test rule changes.
  2. Open up the relevant documentation, especially for preprocessor tuning – see the ‘doc’ directory in the snort source.
  3. Have some scripts/traffic replays ready with traffic/attacks you need to be alerting on
  4. Iterate: read the docs, make changes to snort.conf (for preprocessor config), add exceptions/suppressions to Snort's threshold.conf or to PulledPork's disablesid, dropsid, enablesid and modifysid confs, and run the IDS to check for false positives.

If there are multiple operating systems in your environment, for best results define ipvars to isolate the different OSs. This will ensure you can eliminate false positives whilst maintaining a tight alerting policy.
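
For example (illustrative fragments only – the subnets, SID and IP are placeholders, and the stream5 lines assume the rest of the default preprocessor configuration is in place):

# snort.conf - separate OS populations with ipvars
ipvar WINDOWS_SERVERS [10.0.1.0/24]
ipvar LINUX_SERVERS [10.0.2.0/24]
# $WINDOWS_SERVERS / $LINUX_SERVERS can then be used in rule headers to scope OS-specific rules

# Target-based stream reassembly: bind the right TCP policy to each population
preprocessor stream5_tcp: bind_to 10.0.1.0/24, policy windows
preprocessor stream5_tcp: bind_to 10.0.2.0/24, policy linux

# threshold.conf - suppress a noisy alert from a single known-good source
suppress gen_id 1, sig_id 1000001, track by_src, ip 10.0.1.50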

HttpInspect

From doc: HttpInspect is a generic HTTP decoder for user applications. Given a data buffer, HttpInspect will decode the buffer,  find HTTP fields, and normalize the fields. HttpInspect works on both client requests and server responses.

Global config –

Custom rules

Writing custom rules using Snort's lightweight rule description language enables Snort to be used for tasks beyond intrusion detection. This example will look at writing a rule to detect Internet Explorer 6 user agents connecting to port 443 (a full example rule follows the option categories below).

Rule Headers -> [Rule Action, Protocol, Source IP Address and Port, Direction Operator, Destination IP Address and Port]

Rule Options -> [msg:"blah"; content:"blah"; nocase; http_header;]

Rule Option categories:

  • general – informational only — msg:, reference:, gid:, sid:, rev:, classtype:, priority:, metadata:
  • payload – look for data inside the packet —
    • content: sets rules that search for specific content in the packet payload and trigger a response based on that data (Boyer–Moore pattern match). If there is a match anywhere within the packet's payload the remainder of the rule option tests are performed (matching is case sensitive by default). Can contain mixed text and binary data; binary data is represented as hexadecimal with pipe separators – (content:”|5c 00|P|00|I|00|P|00|E|00 5c|”;). Multiple content rules can be specified in one rule to reduce false positives. Content has a number of modifiers: [nocase, rawbytes, depth, offset, distance, within, http_client_body, http_cookie, http_raw_cookie, http_header, http_raw_header, http_method, http_uri, http_raw_uri, http_stat_code, http_stat_msg, fast_pattern].
  • non-payload – look for non-payload data
  • post-detection – rule specific triggers that are enacted after a rule has been matched
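
Putting the header and options together, a hedged version of the IE6-to-port-443 rule described above might look like the following (the SID is a placeholder from the local range, and the User-Agent string is only visible if the traffic to 443 is clear-text HTTP, e.g. via a proxy, rather than TLS):

alert tcp $HOME_NET any -> $EXTERNAL_NET 443 (msg:"Internet Explorer 6 user agent to port 443"; \
    flow:to_server,established; content:"User-Agent|3a| "; nocase; \
    content:"MSIE 6."; nocase; distance:0; \
    classtype:policy-violation; sid:1000002; rev:1;)
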
Categories
Random

Validating certificate chains with openssl

Using openssl to verify certificate chains is pretty straightforward – see a full script below.

One thing that confused me for a bit was how to specify trust anchors without importing them into the PKI config of the OS (I also did not want to accept all of the OS default trust anchors).

So.. here's what to do for specific trust anchors:

# make a directory and copy in all desired trust anchors
# make sure the certs are in PEM format, named <blah>.pem
mkdir ~/trustanchors
# create softlinks named by subject hash
cd ~/trustanchors
for X in ./*.pem; do ln -s "$X" ./$(openssl x509 -hash -noout -in "$X").0; done

# confirm the trust anchor(s) are working as expected
openssl verify -CApath ~/trustanchors -CAfile <some_intermediate>.pem <my_leaf>.pem

So here’s a simple script that will pull the cert chain from a [domain] [port] and let you know if it is invalid – note there may be some bugs from characters being encoded / carriage returns going missing when pasted into the blog:

#!/bin/bash

# chain_collector.sh [domain] [port]
# output to stdout
# assumes you have a directory with desired trust anchors at ~/trustanchors

if [ $# -ne 2 ]; then
    echo "USAGE: chain_collector.sh [domain] [port]"
    exit 1
fi

TRUSTANCHOR_DIR="$HOME/trustanchors"
SERVER=$1:$2
TFILE="/tmp/$(basename $0).$$.tmp"
OUTPUT_DIR=$1_$2
mkdir -p "$OUTPUT_DIR"

# grab the presented chain and split it into one temp file per certificate
openssl s_client -showcerts -servername "$1" -connect "$SERVER" </dev/null 2>/dev/null > "$TFILE"
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "tmpcert." c ".pem"}' < "$TFILE"

i=1
for X in tmpcert.*.pem; do
    if openssl x509 -noout -in "$X" 2>/dev/null; then
        echo "#############################"
        cn=$(openssl x509 -noout -subject -in "$X" | sed -e 's#.*CN=##')
        echo "CN: $cn"
        cp "$X" "$OUTPUT_DIR/${cn// /_}.$((i-1)).pem"
        cert_expiry_date=$(openssl x509 -noout -enddate -in "$X" | awk -F= '/notAfter/ { print $NF }')
        seconds_until_expiry=$(echo "$(date --date="$cert_expiry_date" +%s) - $(date +%s)" | bc)
        days_until_expiry=$(echo "$seconds_until_expiry/(60*60*24)" | bc)
        echo "Days until expiry: $days_until_expiry"
        echo $(openssl x509 -noout -text -in "$X" | grep -m1 "Signature Algorithm:")
        echo $(openssl x509 -noout -issuer -in "$X")
        if [ -e tmpcert.$i.pem ]; then
            echo "Parent: $(openssl x509 -noout -subject -in tmpcert.$i.pem | sed -e 's#.*CN=##')"
            echo "Parent Valid? $(openssl verify -verbose -CAfile tmpcert.$i.pem "$X")"
        else
            # last cert presented - check it against the local trust anchor directory
            echo "Parent Valid? This should be the trust anchor: $(openssl verify -CApath "$TRUSTANCHOR_DIR" "$X")"
        fi
        echo "#############################"
    fi
    ((i++))
done
rm -f tmpcert.*.pem "$TFILE"
Categories
Random

SSL Review part 2

RSA in practice

Initializing SSL/TLS with https://youtube.com

In this example the YouTube server is authenticated via its certificate and an encrypted communication session is established. Taking a packet capture of the process enables simple identification of the TLSv1.1 handshake (as described at http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake):

Packet capture download: http://mchost/sourcecode/security_notes/youtube_TLSv1.1_handshake_filtered.pcap

The packet capture starts with the TCP three-way handshake – Frames 1-3

With a TCP connection established, the TLS handshake begins with the negotiation phase:

  1. ClientHello – Frame 4 – a random number [90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7], cipher suites, compression methods and a session ticket (if reconnecting a session).
  2. ServerHello – Frame 6 – chosen protocol version [TLS 1.1], random number [1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78], CipherSuite [TLS_ECDHE_ECDSA_WITH_RC4_128_SHA], Compression method [null], SessionTicket [null]
  3. Server sends Certificate message (depending on cipher suite)
  4. Server sends ServerHelloDone
  5. Client responds with ClientKeyExchange containing a PreMasterSecret, public key or nothing (depending on cipher suite) – the PreMasterSecret is encrypted using the server's public key
  6. Client and server use the random numbers and PreMasterSecret to compute a common secret – the ‘master secret’
  7. Client sends ChangeCipherSpec record
  8. Client sends authenticated and encrypted Finished – contains a hash and MAC of previous handshake message
  9. Server decrypts the hash and MAC to verify
  10. Server sends ChangeCipherSpec
  11. Server sends Finished – with hash and MAC for verification
  12. Application phase – the handshake is now complete; the application protocol is enabled, with content type 23

client random: 90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7 = 10447666340000000000

server random: 1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78 = 1988109383203082608

Interestingly, the negotiation between youtube.com and the Chromium browser resulted in an Elliptic Curve Cryptography (ECC) cipher suite for Transport Layer Security (TLS) being chosen.
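
A quick way to see what your own client negotiates is a few lines of Python's ssl module (a minimal sketch – the hostname is just the example above, and results will differ by client and server):

import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(('youtube.com', 443)) as sock:
    with context.wrap_socket(sock, server_hostname='youtube.com') as tls:
        print(tls.version())   # negotiated protocol version, e.g. 'TLSv1.2'
        print(tls.cipher())    # (cipher suite name, protocol, secret bits)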

Note that there is no step mentioned here for the client to verify the certificate. In the past most browsers would query a certificate revocation list (CRL), though browsers such as Chrome now either ignore CRLs or use certificate pinning.

Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said. – source: http://arstechnica.com/business/2012/02/google-strips-chrome-of-ssl-revocation-checking/

Categories
Random

nf_conntrack: table full, dropping packet on Nessus server

The issue is caused by having iptables rules that track connection state. If the number of connections being tracked exceeds the default nf_conntrack table size [65536], any additional connections will be dropped. It is most likely to occur on machines used for NAT and on scanning/discovery tools (such as Nessus and Nmap).

Symptoms: Once the connection table is full any additional connection attempts will be blackholed.

 

This issue can be detected using:

$dmesg
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
...

Current conntrack settings can be displayed using:

$sysctl -a | grep conntrack
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmpv6_timeout = 30
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_count = 1
net.netfilter.nf_conntrack_buckets = 16384
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_expect_max = 256
net.ipv6.nf_conntrack_frag6_timeout = 60
net.ipv6.nf_conntrack_frag6_low_thresh = 196608
net.ipv6.nf_conntrack_frag6_high_thresh = 262144
net.nf_conntrack_max = 65536

To check the current number of connections being tracked by conntrack:

/sbin/sysctl net.netfilter.nf_conntrack_count

Options for fixing the issue are:

  1. Stop using stateful connection rules in iptables (probably not an option in most cases)
  2. Increase the size of the connection tracking table (also requires increasing the conntrack hash table)
  3. Decreasing timeout values, reducing how long connection attempts are stored (this is particularly relevant for Nessus scanning machines that can be configured to attempt many simultaneous port scans across an IP range)

 

Making the changes in a persistent fashion (RHEL 6 examples):

# 2: Increase number of connections
echo "net.netfilter.nf_conntrack_max = 786432" >> /etc/sysctl.conf
echo "net.netfilter.nf_conntrack_buckets = 196608" >> /etc/sysctl.conf
# Increase the number of buckets to change the ratio from 1:8 to 1:4 (more memory use but better performance)
echo 'echo "196608" > /sys/module/nf_conntrack/parameters/hashsize' >> /etc/rc.local

# 3: Alter timeout values
# Generic timeout from 10 mins to 1 min
echo "net.netfilter.nf_conntrack_generic_timeout = 60" >> /etc/sysctl.conf

# Change unacknowledged timeout to 30 seconds (from 5 mins)
echo "net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30" >> /etc/sysctl.conf

# Change established connection timeout to 1 hour (from 5 days)
echo "net.netfilter.nf_conntrack_tcp_timeout_established = 3600" >> /etc/sysctl.conf

These changes will persist on reboot.

To apply changes without reboot run the following:

sysctl -p
echo "196608" > /sys/module/nf_conntrack/parameters/hashsize

To review changes:

sysctl -a | grep conntrack

Reference and further reading: http://antmeetspenguin.blogspot.com.au/2011/01/high-performance-linux-router.html

Categories
Random

Setting secure, httpOnly and cache control headers using ModSecurity

Many older web applications do not apply headers/tags that are now considered standard information security practices. For example:

  • Pragma: no-cache
  • Cache-Control: no-cache
  • httpOnly and secure flags

Adding these controls can be achieved in the web server configuration (the directives below are Apache mod_headers directives, dropped into a ModSecurity CRS file) without any need to modify the application code.

In the case where I needed to modify the cookie headers to include these new controls, I added the following to the core rule set file modsecurity_crs_16_session_hijacking.conf:

#
# This rule will identify the outbound Set-Cookie SessionID data and capture it in a setsid
#
# adding HttpOnly
Header edit Set-Cookie "(?i)^(JSESSIONID=(?:(?!httponly).)+)$" "$1; httpOnly"
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "0"
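
The edit above only appends HttpOnly; the Secure flag checked for below can be appended in the same way – a hedged variant using the same mod_headers approach:

# adding Secure to any Set-Cookie that does not already carry it
Header edit Set-Cookie "(?i)^((?:(?!secure).)+)$" "$1; Secure"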

 

This adds the cookie controls we were after – Depending on your web application you may need to change ‘JSESSIONID’ to the name of the relevant cookie.

You can find the cookie name simply using browser tools such as Chrome’s Developer Tools (hit F12 in Chrome). Load the page you want to check cookies for and click on the Resources tab:

[Screenshot: Chrome Developer Tools – Resources tab showing the site's cookies]

After setting the HttpOnly and Secure flags you can check the effectiveness using the Console tab and listing the document cookies… which should now return nothing:

document.cookie