Python script for downloading snapshots of all files in your Google Drive, including those shared with you.
source: https://github.com/SecurityShift/tools/blob/master/backup_scripts/google_drive_backup.py
Topics:
At a minimum there will be 6 environments
If the handover at each of these steps were manual, there would be too many opportunities for delays and errors. So:
Some practical work setting up Jenkins…
Topics:
Started off looking at the basic conflict between ITOps and Dev – in essence, the risk aversion of ITOps: why it's there and why it's both good and bad. First-hand experience of being woken up many nights after a ‘great’ new release makes this quite pertinent.
Performance tests:
How can performance tests be conducted throughout the development of new releases, particularly if these releases become more regular?
Proposed answer 1 – a ‘Golden Image’: a set image that is used for developing, testing and operating services. This includes apps, libraries and the OS. Docker makes this more practical with containers (a minimal sketch follows below).
Proposed answer 2 – apply configuration management to all machines (not sure how practical this would be).
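To make the golden image idea concrete, here's a minimal Dockerfile sketch – the base image, package and app path are hypothetical, just to show OS + libs + app baked into one artefact:

# Hypothetical golden image: OS, libraries and app pinned in one artefact
FROM ubuntu:14.04
# Install the runtime so dev, test and prod run identical bits
RUN apt-get update && apt-get install -y openjdk-7-jre-headless
# Bake the application itself into the image
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]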
Practical lab:
Installed VirtualBox, Vagrant, Git, SSH and Packer.
Vagrant configures VMs; Packer enables building of ‘golden images’.
Packer template variables:
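As a sketch (the values are hypothetical and the builder section is abridged), variables are declared in the template's variables block and referenced with the user function:

{
  "variables": {
    "iso_url": "http://releases.ubuntu.com/14.04/ubuntu-14.04-server-amd64.iso",
    "ssh_user": "vagrant"
  },
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "{{user `iso_url`}}",
    "ssh_username": "{{user `ssh_user`}}"
  }]
}

They can then be overridden at build time, e.g. packer build -var 'ssh_user=admin' template.json.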
Lab instructions were pretty straightforward. On to lesson 3.
Review of ‘AWS re:Invent 2015 | (ARC309) Microservices: Evolving Architecture Patterns in the Cloud’ presentation.
From monolithic to microservices, on AWS.
Ruby on Rails -> Java-based functional services.
The Java-based functional services required everything to be loaded into memory – 20 minutes to start a service. They were still very large services, with many engineers working on them, which meant commits to those repos took a long time to QA: you make a commit, start working on something else, then a week or two later have to fix what you hardly remember. And to get specific information out of those big Java apps it was necessary to parse the entire homepage. So…
The big SQL database is still there – this means schema changes are difficult to do without outages. Included in this stage of microservices was:
What are the services? – Email, shipping cost, recommendations, admin, search
Anatomy of a service:
A service has its own datastore and completely owns it. This is how dependency on one big schema is avoided. Services at gilt.com average 2000 lines of code and 32 source files.
Service discovery – enormously simple? Discovery is: a client needs to get to a service – how is it going to get there? ‘It has the name of the service – look up that URL’.
Use ZooKeeper as a highly available store.
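As a sketch of the client side of that lookup, using the Python kazoo library (the ensemble address and the /services/email path are hypothetical):

from kazoo.client import KazooClient

# connect to the (hypothetical) ZooKeeper ensemble
zk = KazooClient(hosts="zk1.example.com:2181")
zk.start()

# each service registers its endpoint under a well-known path
data, stat = zk.get("/services/email")
print("email service endpoint:", data.decode("utf-8"))

zk.stop()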
Moving all this to the cloud via ‘lift and shift’ with AWS Direct Connect. In AWS all services were given their own EC2 instance and dockerized.
Being a good citizen in a microservices organisation:
Again service discovery – simple with naming conventions, DNS and load balancers. Avoid DNS issues with a dynamic service registry (ZooKeeper, Eureka, Consul, SmartStack).
Data management – moving away from the schema-change problems, and the other problems (custom stored procedures, being stuck with one supplier, a single point of failure). Microservices must include decentralisation of datastores: services own their datastores and do not share them. This has a number of benefits: being able to choose whatever datastore technology best meets the service's needs, making changes without affecting other services, and scaling the datastores independently. So how do we ensure transactional integrity? Distributed locking sounds horrible. Do all services really need strict transactional integrity? Use queues to retry later.
Aggregation of data – I need to do analytics on my data? Amazon Kinesis Firehose, Amazon SQS, or a custom feed.
Continuous Delivery/Deployment – introduced some AWS services such as CodeDeploy – or just use Jenkins.
How many services per container/instance? Problems: monitoring granularity, scaling is less granular, ownership is less atomic, and continuous deployment is tougher with immutable containers.
I/O explosion – mapping dependencies between services is tough. Some will be popular/hotspots. Service consumers need to cache where they can. Dependency injection is also an option – you can only make a request to service A if you have the required data from services B and C in your request.
Monitoring – logs are key; also, tracing requests through fanned dependencies can be much easier with a requirement for a header that is passed on.
Unique failures – watched a day in the life of a Netflix engineer… good points on failures. We accept an increased failure rate to maintain velocity; we just want to ensure failures are unique. For this to happen we need to have open and safe feedback.
With all of the technology solutions and paradigms emerging in the IT space, it can be difficult to get a full understanding of everything, particularly before developing biases. So… from the perspective of an infosec and ops guy, I will list out some notes from my own review of the current direction of DevOps. This review is based primarily on Udacity – Intro to DevOps and assorted blogs.
Why do it?
Reduce wastage in software development and operation workflows. Simply, more value, less pain.
What is it?
Most of the definitions out there boil down to communication and collaboration between Developers, QA and IT Ops throughout all stages of the development lifecycle.
Agile development + Continuous Monitoring + Delivery + Automation + Feedback loops = DevOps?
What DevOps is not:
How do you apply it?
CAMS – Culture, Automation, Measurement and Sharing
What technologies enable it?
Coming in the next lesson – a good resource for tools: stackshare.io
Other thoughts
Not much so far – looking forward to testing some tools, particularly how patching and vulnerability management can be applied to Docker images.
Before setting out, it's important to get some basic Snort concepts down.
This deployment will be in Network Intrusion Detection System (NIDS) mode, which performs detection and analysis on traffic. See other options and a nice, concise introduction: http://manual.snort.org/node3.html.
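For reference, a typical NIDS-mode invocation looks something like this (the interface and config path will vary by environment):

# run snort against a config file in NIDS mode, alerting to the console
snort -A console -q -c /etc/snort/snort.conf -i eth0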
Rule application order: activation->dynamic->pass->drop->sdrop->reject->alert->log
Again drawing from the Snort manual, here's a basic breakdown of a Snort alert:
[**] [116:56:1] (snort_decoder): T/TCP Detected [**]
The identifiers take the form [Generator ID:Snort ID:Revision]. Here 116 is the Generator ID, which tells us what component of Snort generated the alert (116 is the Snort decoder); 56 is the alert/rule ID within that generator; and 1 is the revision.
After running PulledPork with the default snort.conf there will likely be a lot of false positives, most of them from the preprocessor rules. There are a few options for eliminating false positives; to retain maintainability of the rulesets and the ability to use PulledPork, do not edit the rule files directly. I use the following steps:
If there are multiple operating systems in your environment, define ipvars to isolate the different OSs for best results. This ensures you can eliminate false positives whilst maintaining a tight alerting policy.
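As a sketch, the ipvar definitions in snort.conf might look like this (names and addresses are hypothetical):

# split the environment by OS so rules can target the right hosts
ipvar WINDOWS_HOSTS [192.168.1.0/25]
ipvar LINUX_HOSTS [192.168.1.128/25]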
From doc: HttpInspect is a generic HTTP decoder for user applications. Given a data buffer, HttpInspect will decode the buffer, find HTTP fields, and normalize the fields. HttpInspect works on both client requests and server responses.
Global config –
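The global configuration sits in snort.conf and looks something like the following (this mirrors the shipped default; see the manual for the full option list):

preprocessor http_inspect: global iis_unicode_map unicode.map 1252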
Writing custom rules using Snort's lightweight rules description language enables Snort to be used for tasks beyond intrusion detection. This example will look at writing a rule to detect Internet Explorer 6 user agents connecting to port 443 (a full example rule follows after the breakdown below).
Rule Headers -> [Rule Actions, Protocols, IP Addresses and Ports, Direction Operator]
Rule Options -> [msg: blah; content: blah; nocase; http_header;]
Rule Option categories: general, payload, non-payload and post-detection.
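Putting the pieces together, the IE6 rule might look like this (the SID is illustrative, and the User-Agent string is only visible if the traffic on 443 is not already encrypted):

alert tcp $HOME_NET any -> $EXTERNAL_NET 443 (msg:"IE6 user agent connecting to port 443"; content:"MSIE 6.0"; nocase; sid:1000001; rev:1;)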
Using openssl to verify certificate chains is pretty straightforward – see the full script below.
One thing that confused me for a bit was how to specify trust anchors without importing them into the PKI config of the OS (I also did not want to accept all of the OS trust anchors).
So… here's what to do for specific trust anchors:
# make a directory and copy in all desired trust anchors
# make sure the certs are in pem format, named <blah>.pem
mkdir ~/trustanchors

# create softlinks with hash
cd ~/trustanchors
for X in ./*.pem; do ln -s $X ./`openssl x509 -hash -noout -in $X`.0; done

# confirm the trust anchor(s) are working as expected
openssl verify -CApath ~/trustanchors -CAfile <some_intermediate>.pem <my_leaf>.pem
So here's a simple script that will pull the cert chain from a [domain] [port] and let you know if it is invalid – note there will likely be some bugs from characters being encoded / carriage returns missing:
#!/bin/bash
# chain_collector.sh [domain] [port]
# output to stdout
# assumes you have a directory with desired trust anchors at ~/trustanchors

if [ $# -ne 2 ]; then
    echo "USAGE: chain_collector.sh [domain] [port]"
    exit 1
fi

TRUSTANCHOR_DIR="$HOME/trustanchors"
SERVER=$1:$2
TFILE="/tmp/$(basename $0).$$.tmp"
OUTPUT_DIR=$1_$2
mkdir $OUTPUT_DIR

# pull the presented chain and split it into one file per certificate
openssl s_client -showcerts -servername $1 -connect $SERVER 2>/dev/null > $TFILE
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "tmpcert." c ".pem"}' < $TFILE

i=1
for X in tmpcert.*.pem; do
    if openssl x509 -noout -in $X 2>/dev/null ; then
        echo "#############################"
        cn=$(openssl x509 -noout -subject -in $X | sed -e 's#.*CN=\(.*\)#\1#')
        echo CN: $cn
        cp $X $OUTPUT_DIR/${cn// /_}.$((i-1)).pem
        cert_expiry_date=$(openssl x509 -noout -enddate -in $X \
            | awk -F= ' /notAfter/ { printf("%s\n",$NF); } ')
        seconds_until_expiry=$(echo "$(date --date="$cert_expiry_date" +%s) - $(date +%s)" | bc)
        days_until_expiry=$(echo "$seconds_until_expiry/(60*60*24)" | bc)
        echo Days until expiry: $days_until_expiry
        echo $(openssl x509 -noout -text -in $X | grep -m1 "Signature Algorithm:" | head)
        echo $(openssl x509 -noout -issuer -in $X)
        # if a parent cert exists in the chain, verify this cert against it
        if [ -a tmpcert.$i.pem ]; then
            echo Parent: $(openssl x509 -noout -subject -in tmpcert.$i.pem | sed -e 's#.*CN=\(.*\)#\1#')
            echo Parent Valid? $(openssl verify -verbose -CAfile tmpcert.$i.pem $X)
        else
            echo "Parent Valid? This is the trust anchor"
        fi
        echo "#############################"
    fi
    ((i++))
done
rm -f tmpcert.*.pem $TFILE
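Usage is then, for example:

./chain_collector.sh www.google.com 443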
Initializing SSL/TLS with https://youtube.com
In this example the YouTube server is authenticated via its certificate and an encrypted communication session is established. Taking a packet capture of the process enables simple identification of the TLSv1.1 handshake (as described at http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake):
Packet capture download: http://mchost/sourcecode/security_notes/youtube_TLSv1.1_handshake_filtered.pcap
The packet capture starts with the TCP three-way handshake – Frames 1-3
With a TCP connection established, the TLS handshake begins with the negotiation phase:
client random: 90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7 = 10447666340000000000
server random: 1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78 = 1988109383203082608
Interestingly, the negotiation between youtube.com and the Chromium browser resulted in an Elliptic Curve Cryptography (ECC) cipher suite for Transport Layer Security (TLS) as the chosen cipher suite.
Note that there is no step mentioned here for the client to verify the certificate. In the past most browsers would query a certificate revocation list (CRL), though browsers such as Chrome now either ignore CRLs or use certificate pinning.
Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said. – source: http://arstechnica.com/business/2012/02/google-strips-chrome-of-ssl-revocation-checking/
This issue is caused by having iptables rule(s) that track connection state. If the number of connections being tracked exceeds the default nf_conntrack table size [65536], any additional connections will be dropped. It is most likely to occur on machines used for NAT or running scanning/discovery tools (such as Nessus and Nmap).
Symptoms: Once the connection table is full any additional connection attempts will be blackholed.
This issue can be detected using:
$ dmesg
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
...
Current conntrack settings can be displayed using:
$ sysctl -a | grep conntrack
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmpv6_timeout = 30
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_count = 1
net.netfilter.nf_conntrack_buckets = 16384
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_expect_max = 256
net.ipv6.nf_conntrack_frag6_timeout = 60
net.ipv6.nf_conntrack_frag6_low_thresh = 196608
net.ipv6.nf_conntrack_frag6_high_thresh = 262144
net.nf_conntrack_max = 65536
To check the current number of connections being tracked by conntrack:
/sbin/sysctl net.netfilter.nf_conntrack_count
Options for fixing the issue are: (1) remove or narrow the iptables rules that track connection state (e.g. with raw-table NOTRACK rules for traffic that doesn't need tracking), (2) increase the size of the conntrack table, or (3) reduce the conntrack timeout values so stale entries expire sooner.
Making the changes persistent – RHEL 6 examples:
# 2: Increase number of connections
echo "net.netfilter.nf_conntrack_max = 786432" >> /etc/sysctl.conf
echo "net.netfilter.nf_conntrack_buckets = 196608" >> /etc/sysctl.conf
# Increase number of buckets to change ratio from 1:8 to 1:4 (more
# memory use but better performance)
echo 'echo "196608" > /sys/module/nf_conntrack/parameters/hashsize' >> /etc/rc.local

# 3: Alter timeout values
# Generic timeout from 10 mins to 1 min
echo "net.netfilter.nf_conntrack_generic_timeout = 60" >> /etc/sysctl.conf
# Change unacknowledged timeout to 30 seconds (from 5 mins)
echo "net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30" >> /etc/sysctl.conf
# Change established connection timeout to 1 hour (from 5 days)
echo "net.netfilter.nf_conntrack_tcp_timeout_established = 3600" >> /etc/sysctl.conf
These changes will persist on reboot.
To apply changes without reboot run the following:
sysctl -p
echo "196608" > /sys/module/nf_conntrack/parameters/hashsize
To review changes:
sysctl -a | grep conntrack
Reference and further reading: http://antmeetspenguin.blogspot.com.au/2011/01/high-performance-linux-router.html
Many older web applications do not apply headers/flags that are now considered standard information security practice. For example: the HttpOnly and Secure cookie flags, and cache-control headers.
Adding these controls can be achieved using ModSecurity without any need to modify the application code.
In a case where I needed to modify the cookie headers to include these new controls, I added the following to the core rule set file modsecurity_crs_16_session_hijacking.conf:
#
# This rule will identify the outbound Set-Cookie SessionID data and capture it in a setsid
#
# adding httpOnly
Header edit Set-Cookie "(?i)^(JSESSIONID=(?:(?!httponly).)+)$" "$1; httpOnly"
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "0"
This adds the cookie controls we were after. Depending on your web application, you may need to change ‘JSESSIONID’ to the name of the relevant cookie.
You can find the cookie name simply using browser tools such as Chrome's Developer Tools (hit F12 in Chrome). Load the page you want to check cookies for, then click on the Resources tab:
After setting the HttpOnly and Secure flags you can check their effectiveness using the Console tab and listing the document cookies… which should now return nothing.
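For example, in the Chrome console:

// with HttpOnly set, session cookies are no longer exposed to script
document.cookie
// -> "" (the JSESSIONID cookie no longer appears)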