We have a web application that has been running on AWS for several years. As Application Load Balancers and the AWS WAF service were not available at the time, we used an external Classic ELB pointing to a pool of EC2 instances running mod_security as our WAF solution, with mod_security using the OWASP ModSecurity Core Rule Set.
Now that Application Load Balancers and AWS WAF are available, we would like to remove the CPU bottleneck that stems from using EC2 instances with mod_security as the current WAF.
Step 1 – Baseline performance with the EC2 WAF solution.
The baseline was completed using https://app.loadimpact.com, where we ran 1000 concurrent users with immediate ramp-up. On our test with 2 x m5.large EC2 instances as the WAF, the WAFs became CPU-pinned within 2 minutes 30 seconds.
This test was repeated with the EC2 WAFs removed from the chain, and we averaged 61 ms across the Load Impact test with 1000 users. So – now we need to implement the AWS WAF solution so that it can be compared.
Step 2 – Create an 'equivalent' rule-set and start using the AWS WAF service.
So – getting started with the AWS WAF documentation we read: 'define your conditions, combine your conditions into rules, and combine the rules into a web ACL.'
Conditions: request strings, source IPs, country/geolocation of the request IP, length of specified parts of the request, SQL code (SQL injection), header values (e.g. User-Agent). Conditions can contain multiple values and regexes.
Rules: combinations of conditions along with an action (allow/block/count). There are regular rules, in which conditions can be AND/OR chained, and rate-based rules, to which a rate-based condition can be added.
Web ACLs: where the actions for rules are defined. Multiple rules can have the same action and thus be grouped in the same ACL. The WAF uses web ACLs to assess requests against rules in the order in which the rules were added to the ACL; the first rule matched (if any) determines which action is taken.
Starting simple: to get started I will implement a rate-based rule that limits requests to our login page per source IP (AWS WAF counts these over a 5-minute window), along with the basic OWASP rules from the Terraform code published by Traveloka. Below is our main.tf with the aws_waf_owasp_top_10_rules module created for this test.
main.tf which references our newly created aws_waf_owasp_top_10_rules module
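The main.tf contents didn't make it into this post, so here is a minimal sketch of what it could look like. The module source and its input are assumptions (check Traveloka's repository for the exact variables it expects), and the /login byte-match scoping is simply one way to tie the rate limit to the login page:

module "owasp_top_10_rules" {
  # Assumed source path – confirm against the Traveloka repository
  source = "github.com/traveloka/terraform-aws-waf-owasp-top-10-rules"

  # Hypothetical input – see the module's variables.tf for real inputs
  waf_prefix = "myapp"
}

# Match requests whose URI starts with /login
resource "aws_waf_byte_match_set" "login_uri" {
  name = "loginUriMatch"

  byte_match_tuples {
    text_transformation   = "LOWERCASE"
    target_string         = "/login"
    positional_constraint = "STARTS_WITH"

    field_to_match {
      type = "URI"
    }
  }
}

# Rate-based rule: block source IPs exceeding 2000 requests to /login
# per 5-minute window (2000 is the minimum AWS WAF Classic accepts)
resource "aws_waf_rate_based_rule" "login_rate_limit" {
  name        = "loginRateLimit"
  metric_name = "loginRateLimit"
  rate_key    = "IP"
  rate_limit  = 2000

  predicates {
    data_id = aws_waf_byte_match_set.login_uri.id
    negated = false
    type    = "ByteMatch"
  }
}

# Web ACL: rules are evaluated in priority order; first match wins
resource "aws_waf_web_acl" "main" {
  name        = "mainWebACL"
  metric_name = "mainWebACL"

  default_action {
    type = "ALLOW"
  }

  rules {
    action {
      type = "BLOCK"
    }
    priority = 1
    rule_id  = aws_waf_rate_based_rule.login_rate_limit.id
    type     = "RATE_BASED"
  }
}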
ab -v 3 -n 2000 -c 100 https://<my_target.com.au>/login > ab_2000_100_waf_test.log
This command logs response codes (-v 3 sets the output verbosity), makes 2000 requests (-n 2000) and runs 100 of those requests concurrently (-c 100). I can then see failed requests by tailing the output:
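ab does not call out blocked requests directly; assuming AWS WAF's default block response of HTTP 403 Forbidden, something like the following surfaces them while the test runs:

# With -v 3 each response code is logged, so blocked requests appear as 403s
tail -f ab_2000_100_waf_test.log | grep 'Response code = 403'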
All looks good for the rate-limit based blocking, though it appears that blocking does not kick in at exactly 2000 requests in the 5-minute period. It also appears that there is a significant (5-10 minute) delay before metrics come through to the WAF stats in the AWS console.
Step 3 – Test the OWASP rules.
After success with the rate-limiting rule, the OWASP Top 10 mitigation rules need to be tested. I will use OWASP ZAP to generate some malicious traffic and see what happens!
So it works – which is good – but I am not really confident about the effectiveness of the OWASP rules (as implemented on the AWS WAF). For now they will do, but some tuning will probably be desirable: all of the requests OWASP ZAP made contained (clearly) malicious content, yet only 7% (53 / 755) of the requests were blocked by the WAF service. It will be interesting to see if there are false positives (valid requests that are blocked) when I conduct step 4, performance testing.
Step 4 – Conduct a performance test using the AWS WAF service.
Conducting a load test with https://app.loadimpact.com demonstrated that the AWS WAF service is highly unlikely to become a bottleneck (though this may differ for other applications and implementations).
Step 5 – Migrate PROD to the AWS WAF service.
Our environment is fully 'terraformed', and implementing the AWS WAF service as part of our Terraform code was working within an hour or so (which is good time for me!).
This week we are deforming objects (to create fancy or irregular objects). Starting with the 'scale' tool, we look at how to scale 3D objects, and how holding 'shift' ensures that objects are scaled with reference to the center point. If only one side is selected, the center point is the center of the 2D side, but we can also triple-click with the select tool to select the entire 3D object and scale with reference to the central point of the 3D shape by holding the ctrl key.
Duplication can be achieved with the move tool: ctrl + move, place the new copy where desired, then type '5x' for 5 copies. We make some curtains by stringing together curves, making a 2D shape and then using the push/pull tool. Duplications, scales and mirrors (using scale) are all needed for the curtains. Next we learn that 'flip along' is more useful than the scale tool for mirroring…
Internal copy arrays were covered next – these enable duplicating an object to a point x distance away, then typing '/4' to create 3 new objects at equal spacing between the original and the first copy.
Finally, faces and planes were examined. Faces have 2 sides: one light (front), one dark (back). Entity information indicates the colors of the front and back faces. Note that light reflection varies with the camera perspective. Changing the orientation of the plane reverses the orientation of the faces (so the dark and bright effect is controllable). Right-clicking and using 'orient faces' can force all faces of an object to be uniform.
The assignment this week was a re-creation of Taipei 101.
With almost all of our clients now preferring AWS and Azure for hosting VMs / Docker containers, we have to manage a lot of AMIs / VM images. Ensuring that these AMIs are correctly configured, hardened and patched is a critical requirement. To do this time- and cost-effectively, we use Packer and Ansible. There are solutions such as Amazon's ECS that extend the boundary of the opaque cloud all the way to the containers, which has a number of benefits but does not currently meet a number of non-functional requirements for most of our clients. If those non-functional requirements were gone, or met by something like AWS ECS, it would be hard to argue against simply using Terraform and ECS – removing our responsibility for managing the Docker host VMs.
Anyway, we are making some updates to our IaaS code base, which includes a number of new requirements and code changes to our Packer and Ansible code. To make these changes correctly and quickly I need a build/test cycle that is as short as possible (shorter than spinning up a new EC2 instance). Fortunately, one of the benefits of Packer is its 'cloud agnosticism'… so theoretically I should be able to test 99% of my new Packer and Ansible code on my Windows 10 laptop using Packer's Hyper-V builder.
I am running Windows 10 Pro on a Dell XPS 15 9560. VirtualBox is the most common go-to option for local VM testing, but that's not an option if you are already running Hyper-V (which I am). So to get things started we need to:
Have a git solution for Windows – I am using Microsoft's VS Code (which is really a great open-source tool from M$)
Install Hyper-V Linux Integration Services on the CentOS 7 base VM (this is required for Packer to be able to determine new VMs' IP addresses) – if you are stuck with Packer failing to connect via SSH to the VM and you are using a Hyper-V switch, this will most likely be the issue
Add a Hyper-V builder to our packer.json (as below)
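Since the builder block itself isn't reproduced here, below is a minimal sketch of what it could look like, assuming we clone from an existing CentOS 7 base VM with Packer's hyperv-vmcx builder; the VM name, switch name, credentials and playbook path are placeholders to adapt:

{
  "builders": [
    {
      "type": "hyperv-vmcx",
      "clone_from_vm_name": "centos7-base",
      "switch_name": "PackerSwitch",
      "communicator": "ssh",
      "ssh_username": "packer",
      "ssh_password": "packer-build-password",
      "shutdown_command": "sudo shutdown -P now"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./playbooks/harden.yml"
    }
  ]
}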
Unfortunately, as is common with these online courses, I got distracted and was late on week 4. Luckily it was a pretty light week, working through:
Follow me tool
Working with spheres
The assignment was creating a bike wheel with tread on the tyre. Getting the tread right was a bit finicky, and since I created a circle with 'too many sides', rendering was very slow on my Dell XPS 15 9560.
Will work on getting back ahead of the schedule for week 5…
Looks primarily at changing object shapes, introducing the move tool and the 2-point arc tool. Using double click to repeat the push/pull tool also proved to be convenient. We then used the move tool to alter the slopes of surfaces, including using the up key to match the slope and then the height of another surface.
Next up is the arc tool, which has 4 variants:
Arc – the first point placed determines where the center point of the arc will be
2 Point Arc – select two points that will be the width of the arc
3 Point Arc – the first 2 points determine the form, and the third point gives the exact length. Ideal for irregularly shaped objects
Again the first pass took a while and was quite difficult, but a complete redraw took only 5 minutes. When drawing structures like this, with eaves and sloped roofs, it is important to complete a room (minus the eaves and roof thickness) to make slope matching easier.
I want to make a model for a landscaping project in my garden. After testing a few different tools (SketchUp, AutoCAD, Fusion 360 and LibreCAD) I realised that using these tools is not intuitive for me… So onto Coursera to do some learning!
My chosen initial course, 3D CAD Fundamental, is for complete novices to 3D modelling / Computer Aided Design. There are follow-up courses with some more extensive examples:
This fundamental CAD course uses SketchUp Make 2017 as the CAD software. We are using the 'Construction Documentation – Meters' template.
Week 1 is just set up of software and takes about 5 minutes.
Week 2 has a few worked-through examples to get you using the tools. I started this yesterday and it took me 30 minutes to draw a simple cube with some steps. The lesson introduced the following tools:
Tape measure tool + guidelines
Also critical were some tidbits on what the mouse icons mean, how to draw lines based on the x, y, z axes (wow, axes is the plural of axis?!), midpoints, and typing numbers while drawing to be exact.
Magic Cube module: using the line tool (click once, move to draw a line, stick to an axis to keep it straight, and type the desired distance on the keypad), then using divided lines to build a stepped cube. Guidelines were also introduced, along with the rectangle and push/pull tools.
From how difficult the Magic Cube module was, I saw the week 2 assignment and thought there was no way I could do it in less than 2 hours… but after failing for about 30 minutes, things became a lot easier. I guess getting used to perspective and managing the camera view helps a lot. Anyway, I was very happy to complete my first 3D model!
The ongoing pop quizzes and the extensive quiz/test at the end of each lesson seem to be a very effective method for holding attention and retaining more information from the lesson – surely more effective than a non-interactive lecture!
A lot of people need to do offsite backups for AWS RDS – which can be done trivially within AWS. But if you need offsite backups to protect against things like an AWS account breach or AWS-specific issues, those backups must include diversification of suppliers.
I am going to use AWS Database Migration Service (DMS) to replicate AWS RDS data to a VM running in Azure, and set up snapshots/backups of the Azure hosts.
The steps I used to do this are:
1. Set up an Azure Windows 2016 VM
2. Create an IPSec tunnel between the Azure Windows 2016 VM and my AWS native VPN
3. Install a matching version of Oracle on the Windows 2016 VM
4. Configure the Database Migration Service
5. Create a data migration and continuous replication task
6. Snapshots/backups and monitoring
7. Debug and gotchas
1,2 – Set up Azure Windows 2016 VM and IPSec tunnel
Create a network in Azure and place a VM in the network with 2 interfaces. One interface must have a public IP – call this one 'external'; the other interface will be called 'internal'. Once you have the public IP address of your Windows 2016 VM, create a 'Customer Gateway' in your AWS VPC pointing to that IP. You will also need a 'Virtual Private Gateway' configured for that VPC. Then create a 'Site-to-Site VPN connection' in your VPC (it won't connect for now, but create it anyway). Configure your Azure Windows 2016 VM to make an IPSec tunnel by following these instructions (the instructions are for 2012 R2, but the only tiny difference is some menu items): https://docs.aws.amazon.com/vpc/latest/adminguide/customer-gateway-windows-2012.html#cgw-win2012-download-config. Once this is completed, both your AWS site-to-site connection and your Azure VM will be trying to connect to each other. Ensure that the Azure VM has its security groups configured to allow your AWS site-to-site VPN to reach the Azure VM (I am not sure which ports and protocols specifically; I just white-listed all traffic from the two AWS tunnel endpoints). Once this was done it took around 5 minutes for the tunnel to come up (I was checking the status via the AWS console). I also found that it requires traffic to be flowing over the link, so I was running a ping -t <aws_internal_ip> from my Azure VM. Also note that you will need to add routes to your applicable AWS route tables and update AWS security groups for the Azure subnet as required.
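If you prefer the CLI to the console, the AWS side of the above can be sketched roughly as follows; the IPs, IDs and CIDR are placeholders, and this assumes a static-routing VPN rather than BGP:

# Customer gateway pointing at the Azure VM's public IP (the ASN is required even for static routing)
aws ec2 create-customer-gateway --type ipsec.1 --public-ip <azure_vm_public_ip> --bgp-asn 65000

# Virtual private gateway, attached to the VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id <vgw-id> --vpc-id <vpc-id>

# Site-to-site VPN connection with a static route to the Azure subnet
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id <cgw-id> \
  --vpn-gateway-id <vgw-id> --options StaticRoutesOnly=true
aws ec2 create-vpn-connection-route --vpn-connection-id <vpn-id> \
  --destination-cidr-block <azure_subnet_cidr>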
3 – Install a matching version of Oracle on the Windows 2016 VM
4,5 – Configure the Database Migration Service and migration/replication
Log into your AWS console, go to 'Database Migration Service' (DMS) and hit 'get started'. You will need to set up a replication instance (well, at least pick a size, security group, type, etc.). Note that the security group that you add the replication host to must have access to both your RDS and your Azure DBs – I could not pick which subnet the host went into, so I had to add routes for a couple more subnets than expected. Next you will need to add your source and target databases. When you add in the details and hit 'test', the wizard will confirm connectivity to both databases. I ran into issues on both of these points because of not adding the correct security groups, the Windows firewall on the Azure VM, and my VPN link dropping due to no traffic (I am still investigating a better fix than ping -t for this). Next you will create a migration/replication task; if you are going to be doing ongoing replication you need to run the following on your Oracle RDS DB:
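The commands weren't carried over into this post; for ongoing replication (CDC) from an RDS Oracle source, the AWS DMS documentation prescribes archive log retention plus supplemental logging, along these lines (the 24-hour retention is a value you should size to your own change volume):

exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'PRIMARY KEY');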
You can filter by schema, which should provide you with a drop-down box to select which schema(s). Ensure that you enable logging on the migration/replication task – if you get errors (which I did the first couple of attempts), you won't be fixing anything without the logs.
6 – Snapshots and Monitoring
For my requirements, daily snapshots/backups of the Azure VM provide sufficient coverage. The Backup vault must be upgraded to v2 if you are using a Standard SSD disk on the Azure VM, see: https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade. To enable email notifications for Azure backups, go to the Azure portal, select the applicable vault, click on 'view alerts' -> 'Configure notifications' -> enter an email address and check 'critical' (or whichever type of email notifications you want). Other recommended monitoring checks include: ping for VPN connectivity, a status check of the DMS task (using the AWS CLI – see the sketch below), and an SQL query on the destination database confirming the latest timestamp of a table that should have regular updates.
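As a sketch of that DMS status check – the task identifier is a placeholder, and a healthy task should report 'running':

# Prints the status of the named replication task (e.g. 'running')
aws dms describe-replication-tasks \
  --query "ReplicationTasks[?ReplicationTaskIdentifier=='<task-id>'].Status" \
  --output text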
7 – Debug and Gotchas
Azure security group allowing the AWS VPN tunnel endpoints to reach the Azure VM
Windows firewall rule on the VM allowing Oracle traffic (default port 1521) from the AWS RDS private subnet
Route tables on AWS subnets to route traffic to your Azure subnet via the Virtual Private Gateway
Security groups on AWS to allow traffic from the Azure subnet
Stability of the AWS <-> Azure VM site-to-site tunnel requires constant traffic
The DMS replication host seems to go into an arbitrary subnet of your VPC (there is probably a default setting I didn't see), but check this and ensure it has routes for the Azure site-to-site tunnel
Ensure the RDS Oracle database has the archive log retention and supplemental logging settings as per steps 4 and 5.