Step 2: Use SQL Developer’s Database Export wizard to export the view to your desired file format
In SQL Developer, go to Tools → Database Export
Select the correct DB connection
Uncheck ‘Export DDL’
Under ‘Export Data’ change Format to CSV or XLSX (or whatever file type is desired)
Adjust file names and output dir as desired, click Next
Uncheck all except ‘Views’, Next
Ensure the correct schema is selected; if the schema is not the default schema for your user, click ‘More’ – select the correct schema and change the type from ‘All Objects’ to ‘View’
Click ‘Lookup’ and you will see the view you created in Step 1
Select the view and hit the blue arrow to move it into the lower box, then click Next, review and Finish. Your export will now run, with a status box for the task.
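If you would rather script the export than click through the wizard, SQL*Plus can spool the view to CSV – a minimal sketch only, assuming SQL*Plus 12.2+ (which supports SET MARKUP CSV) and a placeholder view name and connection string:

# sketch – export the Step 1 view to CSV with SQL*Plus; names and credentials are placeholders
sqlplus -S myuser/mypassword@//dbhost:1521/ORCL <<'EOF'
SET MARKUP CSV ON
SET FEEDBACK OFF
SPOOL my_export_view.csv
SELECT * FROM my_export_view;
SPOOL OFF
EXIT
EOF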
As I could not find an up-to-date Docker setup for the new eramba release, I have made one. The repo for this, targeting the 2019 community version (specifically c2.4.1), can be found here: https://github.com/markz0r/eramba-community-docker
Follow the steps in README.md and you should be testing the new eramba in no time.
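Roughly, and assuming the compose-based layout in that repo (README.md is authoritative), the flow looks like:

# clone the repo and bring the stack up – see README.md for the real steps
git clone https://github.com/markz0r/eramba-community-docker.git
cd eramba-community-docker
docker-compose up -d
# follow the container logs while eramba initialises
docker-compose logs -f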
March 2020: Updated for community edition 2.8.1
Thanks to the team at Eramba for making the tool available for all!
We have a web application that has been running on AWS for several years. As Application Load Balancers and the AWS WAF service were not available at the time, we used an external classic ELB pointing to a pool of EC2 instances running mod_security as our WAF solution, with the OWASP ModSecurity Core Rule Set.
Now that Application Load Balancers and AWS WAF are available, we would like to remove the CPU bottleneck that stems from using EC2 instances running mod_security as the current WAF.
Step 1 – Baselining performance with the EC2 WAF solution.
The baseline was completed using https://app.loadimpact.com where we ran 1000 concurrent users, with immediate ramp-up. In our test with 2 x m5.large EC2 instances as the WAF, the WAFs became CPU-pinned within 2 minutes 30 seconds.
This test was repeated with the EC2 WAFs removed from the chain and we averaged 61 ms across the Load Impact test with 1000 users. So – now we need to implement the AWS WAF solution so that it can be compared.
Step 2 – Create an ‘equivalent’ rule-set and start using AWS WAF service.
We use Terraform for this environment, so the CloudFormation web ACL and rules are not being used, and I will start by testing out the Terraform code published by Traveloka. After having a look at the code in more detail I decided I needed a better understanding of the Terraform modules (and the AWS service), so I will write some Terraform code from scratch.
So – getting started with the AWS WAF documentation we read: ‘define your conditions, combine your conditions into rules, and combine the rules into a web ACL’.
Conditions: request strings, source IPs, country/geolocation of the request IP, length of specified parts of the request, SQL code (SQL injection), header values (e.g. User-Agent). Conditions can contain multiple values and regexes.
Rules: combinations of conditions along with an action (allow/block/count). There are regular rules, where conditions can be AND/OR chained, and rate-based rules, where a rate-based condition is added on top.
Web ACLs: where the action for each rule is defined. Multiple rules can have the same action and thus be grouped in the same ACL. The WAF uses web ACLs to assess requests against rules in the order in which the rules are added to the ACL; the first rule matched (if any) determines which action is taken.
Starting simple: to get started I will implement a rate-limiting rule which limits requests to our login page from a specified IP to 5 per minute, along with the basic OWASP rules from the Terraform code published by Traveloka. Below is our main.tf with the aws_waf_owasp_top_10_rules module created for this test.
main.tf which references our newly created aws_waf_owasp_top_10_rules module
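The actual main.tf is not reproduced here. As a rough illustration of the rate-based part only (not the Traveloka module), the equivalent can be sketched with the regional WAF CLI – rate-based rules count requests per source IP over a rolling 5-minute window, and the names and values below are hypothetical; the real deployment defines this in Terraform:

# sketch only – create a rate-based rule via the regional (ALB-facing) WAF CLI
CHANGE_TOKEN=$(aws waf-regional get-change-token --query ChangeToken --output text)
aws waf-regional create-rate-based-rule \
  --name login-rate-limit \
  --metric-name loginRateLimit \
  --rate-key IP \
  --rate-limit 2000 \
  --change-token "$CHANGE_TOKEN"
# the returned RuleId is then added to a web ACL with a BLOCK action (aws waf-regional update-web-acl)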
ab -v 3 -n 2000 -c 100 https://<my_target.com.au>/login > ab_2000_100_waf_test.log
This command logs request headers (-v 3 for verbosity of output), makes 2000 requests (-n 2000) and conducts those requests 100 at a time concurrently (-c 100). I can then see failed requests by tailing the output:
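The log output itself isn’t shown here, but assuming AWS WAF is returning its default 403 for blocked requests, the response-code lines that ab logs at -v 3 can be summarised with something like:

# count ab responses by status code – blocked requests show up as 403s
grep "Response code" ab_2000_100_waf_test.log | sort | uniq -c
# or watch for blocks while the test is still running
tail -f ab_2000_100_waf_test.log | grep --line-buffered 403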
All looks good for the rate-limiting based blocking, though it appears that blocking does not occur at exactly 2000 requests in the 5-minute period. It also appears that there is a significant (5–10 min) delay before metrics come through to the WAF stats in the AWS console.
In the AWS console, about 10 minutes after running the HTTP ab tool, we can see blocks
After success on the rate-limiting rule, the OWASP Top 10 mitigation rules need to be tested. I will use OWASP ZAP to generate some malicious traffic and see what happens!
So it works – which is good, but I am not really confident about the effectiveness of the OWASP rules (as implemented on the AWS WAF). For now, they will do… but some tuning will probably be desirable as all of the requests OWASP ZAP made contained (clearly) malicious content but only 7% (53 / 755) of the requests were blocked by the WAF service. It will be interesting to see if there are false positives (valid requests that are blocked) when I conduct step 4, performance testing.
Step 4 – Conduct a performance test using the AWS WAF service.
Conducting a load test with https://app.loadimpact.com demonstrated that the AWS WAF service is highly unlikely to become a bottleneck (though this may differ for other applications and implementations).
Step 5 – Migrate PROD to the AWS WAF service.
Our environment is fully ‘terraformed’, so implementing the AWS WAF service as part of our Terraform code was working within an hour or so (which is good time for me!).
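The Terraform itself isn’t shown here, but the key moving part in production is the association between the regional web ACL and the ALB; a quick way to confirm it from the CLI (the ARNs below are placeholders):

# confirm which web ACL is associated with the ALB
aws waf-regional get-web-acl-for-resource --resource-arn <alb-arn>
# the association can also be created manually if needed
# aws waf-regional associate-web-acl --web-acl-id <web-acl-id> --resource-arn <alb-arn>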
With almost all of our clients now preferring AWS and Azure for hosting VMs / Docker containers, we have to manage a lot of AMIs / VM images. Ensuring that these AMIs are correctly configured, hardened and patched is a critical requirement. To do this time- and cost-effectively, we use Packer and Ansible. There are solutions such as Amazon’s ECS that extend the boundary of the opaque cloud all the way to the containers, which has a number of benefits but does not currently meet a number of non-functional requirements for most of our clients. If those non-functional requirements were gone, or were met by something like AWS ECS, it would be hard to argue against simply using Terraform and ECS – removing our responsibility for managing the Docker host VMs.
Anyway, we are making some updates to our IaaS code base, which includes a number of new requirements and code changes to our Packer and Ansible code. To make these changes correctly and quickly I need a build/test cycle that is as short as possible (shorter than spinning up a new EC2 instance). Fortunately, one of the benefits of Packer is its ‘cloud agnosticism’… so theoretically I should be able to test 99% of my new Packer and Ansible code on my Windows 10 laptop using Packer’s Hyper-V builder.
Setting up
I am running Windows 10 Pro on a Dell XPS 15 9560. VirtualBox is the most common go-to option for local VM testing, but that’s not an option if you are already running Hyper-V (which I am). So to get things started we need to:
Have a Git solution for Windows – I am using Microsoft’s VS Code (which is really a great open-source tool from M$)
Install packer for windows, ensuring the executable is in the Windows PATH
Create a VM in Hyper-V to act as a base template (I am using CentOS 7 minimal, as we use CentOS AMIs on AWS – https://www.centos.org/download/)
Install Hyper-V Linux Integration Services on the CentOS 7 base VM (this is required for Packer to be able to determine new VMs’ IP addresses) – if you are stuck with Packer failing to connect to the VM over SSH and you are using a Hyper-V switch, this will most likely be the issue
Add a Hyper-V builder to our packer.json (as below)
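The builder block isn’t reproduced here; as a rough sketch only (assuming Packer’s hyperv-vmcx builder, a base VM named ‘centos7-base’ and a Hyper-V switch named ‘ExternalSwitch’ – adjust names, credentials and builder options to suit your setup), the entry to merge into the builders array of packer.json looks something like this:

# example Hyper-V builder entry to merge into packer.json – all values are placeholders
cat > hyperv-builder-example.json <<'EOF'
{
  "builders": [
    {
      "type": "hyperv-vmcx",
      "clone_from_vm_name": "centos7-base",
      "switch_name": "ExternalSwitch",
      "communicator": "ssh",
      "ssh_username": "packer",
      "ssh_password": "packer",
      "shutdown_command": "sudo shutdown -P now"
    }
  ]
}
EOF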
Now, assuming the Packer and Ansible code is in a functional state, I can build a new VM and run Packer + Ansible via PowerShell (run with administrative privileges) with:
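The command itself isn’t shown in the post, but it is essentially the standard Packer invocation – a sketch, assuming packer.exe is on the PATH and the Hyper-V builder is named after its type:

# validate the template, then build only the Hyper-V image
packer validate packer.json
packer build -only=hyperv-vmcx packer.json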
A lot of people need to do offsite backups for AWS RDS, which can be done trivially within AWS. However, if you need offsite backups to protect against things like an AWS account breach or AWS-specific issues, the backups must include diversification of suppliers.
I am going to use AWS Database Migration Service (DMS) to replicate AWS RDS data to a VM running in Azure, and set up snapshots/backups of the Azure host.
The new (2018) AWS Database Migration Service solves offsite RDS backup problems
The steps I used to do this are:
Set up an Azure Windows 2016 VM
Create an IPSec tunnel between the Azure Windows 2016 VM and my AWS Native VPN
Install matching version of Oracle on the Windows 2016 VM
Configure Data Migration service
Create a data migration and continuous replication task
Snapshots/Backups and Monitoring
Debug and Gotchas
1,2 – Set up Azure Windows 2016 VM and IPSec tunnel
Create a network on Azure and place a VM in the network with two interfaces. One interface must have a public IP – call this one ‘external’; the other interface will be called ‘internal’. Once you have the public IP address of your Windows 2016 VM, create a ‘Customer Gateway’ in your AWS VPC pointing to that IP. You will also need a ‘Virtual Private Gateway’ configured for that VPC. Then create a ‘Site-to-Site VPN connection’ in your VPC (it won’t connect for now, but create it anyway). Configure your Azure Windows 2016 VM to make an IPSec tunnel by following these instructions (the instructions are for 2012 R2, but the only tiny difference is some menu items): https://docs.aws.amazon.com/vpc/latest/adminguide/customer-gateway-windows-2012.html#cgw-win2012-download-config.

Once this is completed, both your AWS site-to-site connection and your Azure VM are trying to connect to each other. Ensure that the Azure VM has its security groups configured to allow your AWS site-to-site VPN to reach the Azure VM (I am not sure which ports and protocols specifically; I just white-listed all traffic from the two AWS tunnel endpoints). Once this was done it took around 5 minutes for the tunnel to come up (I was checking the status via the AWS console). I also found that it requires traffic to be flowing over the link, so I was running a ping -t <aws_internal_ip> from my Azure VM. Also note that you will need to add routes to your applicable AWS route tables and update AWS security groups for the Azure subnet as required.
3 – Install matching version of Oracle on the Windows 2016 VM
4,5 – Configure Data Migration service and migration/replication
Log into your AWS console, go to ‘Database Migration Service’ / ‘DMS’ and hit get started. You will need to set up a replication VM (well, at least pick a size, security group, type etc.). Note that the security group that you add the replication host to must have access to both your RDS and your Azure DBs – I could not pick which subnet the host went into, so I had to add routes for a couple more subnets than expected. Next you will need to add your source and target databases. When you add in the details and hit test, the wizard will confirm connectivity to both databases. I ran into issues on both of these points because of not adding the correct security groups, the Windows firewall on the Azure VM, and my VPN link dropping due to no traffic (I am still investigating a fix better than ping -t for this). Next you will be creating a migration/replication task; if you are going to be doing ongoing replication you need to run the following on your Oracle RDS DB:
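The statements aren’t reproduced in the post, but for RDS Oracle the ongoing-replication prerequisites are set through the rdsadmin package – roughly along these lines (a sketch; the connection details are placeholders and the AWS DMS documentation for your engine version is authoritative):

# enable the archive log retention and supplemental logging DMS needs for ongoing replication
sqlplus admin@//<rds-endpoint>:1521/ORCL <<'SQL'
-- keep archived redo logs long enough for DMS to read them
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
-- add supplemental logging so captured changes can be applied to the target
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD', 'PRIMARY KEY');
exit
SQL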
You can filter by schema, which should provide you with a drop-down box to select which schema(s). Ensure that you enable logging on the migration/replication task (if you get errors, which I did the first couple of attempts, you won’t be fixing anything without the logs).
6 – Snapshots and Monitoring
For my requirements, daily snapshots/backups of the Azure VM provide sufficient coverage. The backup vault must be upgraded to v2 if you are using a Standard SSD disk on the Azure VM, see: https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade. To enable email notifications for Azure backups, go to the Azure portal, select the applicable vault, click on ‘View alerts’ -> ‘Configure notifications’ -> enter an email address and check ‘critical’ (or whichever types of email notifications you want). Other recommended monitoring checks include: a ping check for VPN connectivity, a status check of the DMS task (using the AWS CLI), and an SQL query on the destination database confirming the latest timestamp of a table that should have regular updates.
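As an example of the DMS status check, something along these lines can run from cron or a monitoring agent (a sketch – the task ID and IP are placeholders):

# alert if the DMS replication task is not running
STATUS=$(aws dms describe-replication-tasks \
  --filters Name=replication-task-id,Values=<replication-task-id> \
  --query 'ReplicationTasks[0].Status' --output text)
[ "$STATUS" = "running" ] || echo "WARNING: DMS replication task status is ${STATUS}"
# basic check that the site-to-site VPN is passing traffic
ping -c 3 <azure_internal_ip> > /dev/null || echo "WARNING: Azure VPN link appears down"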
7 – Debug and Gotchas
Azure security group allowing AWS vpn tunnel endpoint to Azure VM
Windows firewall rule on VM allowing Oracle traffic (default port 1521) from AWS RDS private subnet
Route tables on AWS subnets to route traffic to your Azure subnet via the Virtual Private Gateway
Security groups on AWS to allow traffic from Azure subnet
Stability of the AWS <–> Azure VM site-to-site tunnel requires constant traffic
The DMS replication host seems to go into an arbitrary subnet of your VPC (there is probably some default setting I didn’t see), so check this and ensure it has routes for the Azure site-to-site VPN
Ensure the RDS Oracle database has the archive log retention and supplemental logging settings as per steps 4 and 5.
It has been a long while since I looked at RDS (Remote Desktop Services) – with Azure, Office 365 and Server 2016 there seem to be a lot of new (or better) options. To get across some of the options I have decided to do a review of Microsoft’s documentation with the aim of deciding on a solution for a client. The specific scenario I am looking at is a client with low-spec workstations, using Office 365 Business Premium (including OneDrive) and Windows 10, with a single Windows 2016 virtual private server.
Some desired features:
Users should be able to use their workstations or the remote desktop server interchangeably
Everything done on workstations should be replicated to the RDS server and vice versa
Contention on editing documents should be dealt with reasonably
The credential for signing into workstations, email and remote services should be the same (ideally with a 2FA option for RDS)
Issues faced:
The Office 365 users were created several months before the RDS server was deployed
The Azure AD Connect service, which synchronizes users in an Active Directory deployment with Office 365 users (Azure AD), is a one-way street: it assumes the ‘on-prem’ Active Directory objects exist already and only need to be created in Azure AD (Office 365) – see the workaround for this here
Office 365 licensing for ‘shared’ computers means that Office 365 Business Premium users can’t use a VPS – so enterprise plans (or other plans that include shared computer activation) must be used.
How to configure Remote Desktop Services? After getting Active Directory installed and configured to sync with Azure AD, I now need to choose and implement the RDS configuration.
Starting with the Microsoft Doc we have the following options:
Session-based virtualization – Many users per host
VDI – Virtual machine for each user — note that if your server is already a VM this isn’t really an option (nested VMs are not ideal)
Based on our client’s situation, session-based virtualization makes much more sense for now. Next up – what are we going to publish to the users logging into the Remote Desktop Service?
Desktops – Providing users with the full desktop experience
RemoteApps – Users run apps that seem to be running locally but are in fact being served via RDS
Desktops makes sense for now. So – how do we set up a Session-based desktop for remote access by multiple users?
Create an AD service and link it to Office 365 with Azure AD Connect
As Microsoft says:
You still must have an internet-facing server to utilize RD Web Access and RD Gateway for external users
You still must have an Active Directory and–for highly available environments–a SQL database to house user and Remote Desktop properties
You still must have communication access between the RD infrastructure roles (RD Connection Broker, RD Gateway, RD Licensing, and RD Web Access) and the end RDSH or RDVH hosts to be able to connect end-users to their desktops or applications.
After setting all of this up I am very happy with the results. The single source of truth for users must be the ‘on-prem’ AD. Syncing an on-prem AD service to Office 365 is almost seamless, with some minor tweaks required that are fairly easy to find with some googling.
Importing users from Office 365 to an on-prem AD can be required in cases where an organisation that has been using Office 365 wants to start using a Remote Desktop Service or the like. To reduce the number of passwords and provide single sign-on (or at least same sign-on), the Windows server may have Azure AD Connect installed and be syncing with the business’s Office 365 account. The problem is that out of the box Azure AD Connect is a one-way street: it only creates objects on the Azure side – it does not import Office 365 users into the server’s Active Directory.
To get users from Office 365 created in a new Windows Active Directory Service:
After deploying OpenStack Keystone, Swift and Horizon, I need to change the public endpoints for these services from HTTP to HTTPS.
Horizon endpoint
This deployment is a single server for Horizon. The TLS/SSL termination point is on the server (no load balancers or the like).
To get Horizon using TLS/SSL, all that needs to be done is adding the key, cert and CA, and updating the vhost. My vhost now looks like this:
WSGISocketPrefix run/wsgi
<VirtualHost *:80>
ServerAdmin [email protected]
ServerName horizon-os.mwclearning.com.au
ServerAlias api-cbr1-os.mwclearning.com.au
ServerAlias api.cbr1.os.mwclearning.com.au
Redirect permanent / https://horizon-os.mwclearning.com.au/dashboard
</VirtualHost>
<VirtualHost *:443>
ServerAdmin [email protected]
ServerName horizon-os.mwclearning.com.au
ServerAlias api-cbr1-os.mwclearning.com.au
ServerAlias api.cbr1.os.mwclearning.com.au
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
Options All
AllowOverride All
Require all granted
</Directory>
<Directory /usr/share/openstack-dashboard/static>
Options All
AllowOverride All
Require all granted
</Directory>
SSLEngine on
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
SSLCertificateKeyFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.key.pem
SSLCertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.cert.pem
SSLCACertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.ca.pem
</VirtualHost>
With a systemctl restart httpd this was working.
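A quick way to confirm that the right certificate is now being served on 443:

# check the certificate subject and validity dates presented by the Horizon vhost
openssl s_client -connect horizon-os.mwclearning.com.au:443 -servername horizon-os.mwclearning.com.au </dev/null 2>/dev/null | openssl x509 -noout -subject -dates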
Logging into Horizon and checking the endpoints under Project -> Compute -> API Access I can see some more public HTTP endpoints:
Identity http://api.cbr1.os.mwclearning.com:5000/v3/
Object Store http://swift.cbr1.os.mwclearning.com:8080/v1/AUTH_---
These endpoints are defined in Keystone; to see and edit them I can SSH to the Keystone server and run some MySQL queries. Before I do this I need to make sure that the Swift and Keystone endpoints are configured to use TLS/SSL.
Keystone endpoint
Again the TLS/SSL termination point is apache… so some modification to /etc/httpd/conf.d/wsgi-keystone.conf is all that is required:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LimitRequestBody 114688
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone.log
CustomLog /var/log/httpd/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup keystone-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
SSLEngine on
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
SSLCertificateKeyFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.key.pem
SSLCertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.cert.pem
SSLCACertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.ca.pem
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LimitRequestBody 114688
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone.log
CustomLog /var/log/httpd/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Alias /identity_admin /usr/bin/keystone-wsgi-admin
<Location /identity_admin>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup keystone-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
</VirtualHost>
So now we should have an endpoint that will decrypt and forward HTTPS requests from port 443 to the Swift listener on port 8080.
Updating internal auth
As Keystone’s auth listener is the same for internal and external traffic (one vhost), I also updated the internal address to match the FQDN, allowing for valid TLS.
Keystone service definitions
mysql -u keystone -h services01 -p
use keystone;
select * from endpoint;
# Update these endpoints to use HTTPS and the correct FQDNs
update endpoint set url='https://swift-os.mwclearning.com:8080/v1/AUTH_%(tenant_id)s' where id='579569...';
update endpoint set url='https://api-cbr1-os.mwclearning.com:5000/v3/' where id='637e843b...';
update endpoint set url='http://controller01-int.mwclearning.com:5000/v3/' where id='ec1ad2e...';
Now after restarting the services all is well with TLS!
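A final sanity check from any client with the OpenStack CLI configured, to confirm the catalog now advertises the HTTPS endpoints:

# list the public endpoints and the full service catalog
openstack endpoint list --interface public
openstack catalog list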
In the previous post I described some of the general direction and ‘wants’ for the next step of our IT ops, summarised as:
Want | Description
Continuous Deployment | We need to have more automation and resiliency in our deployments, without adding our own code that needs to be changed when architecture and service dependencies change.
Automation of deployments | Deployments, rollbacks, service discovery, easy local deployments for devs.
Less time on updates | Automation of updates.
Reduced dependence on config management (Puppet) | Reduce the number of Puppet policies that are applied to hosts.
Image Management | Image management (with images immutable post deployment).
Reduce baseline work for IT staff | IT staff have low baseline work, more room for initiatives.
Reduce hardware footprint | There can be no increase in hardware resource requirements (cost).
Start with the basics
Let’s start with the simple demo deployment supplied by the CoreOS team.
That setup was pretty straightforward (as supplied demos usually are). Simple verification that the k8s components are up and running:
vagrant global-status
#expected output assuming 1 etcd, 1 k8s controller and 2 k8s worker as defined in config.rb
id name provider state directory
----------------------------------------------------------------------------------------------------------
2146bec e1 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
87d498b c1 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
46bac62 w1 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
f05e369 w2 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
#set kubectl config and context
export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi
kubectl get nodes
#expected output
NAME STATUS AGE
172.17.4.101 Ready,SchedulingDisabled 4m
172.17.4.201 Ready 4m
172.17.4.202 Ready 4m
kubectl cluster-info
#expected output
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
*Note: It can take some time (5 minutes or longer if CoreOS is updating) for the Kubernetes cluster to become available. To see the status, vagrant ssh c1 (or w1/w2/e1) and run journalctl -f (to follow the service logs).
Now the dashboard can be accessed at http://localhost:9090/.
Now let’s try some simple k8s examples:
Create a load balanced nginx deployment:
# create 2 containers from nginx image (docker hub)
kubectl run my-nginx --image=nginx --replicas=2 --port=80
# expose the service to the internet
kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
# list the pods created by the deployment
kubectl get po
# show service info
kubectl get service my-nginx
kubectl describe service/my-nginx
First interesting point… with the simple deployment above, I have already gone awry. Though I have 2 nginx containers (presumably for redundancy and load balancing), they have both been deployed on the same worker node (host). Let’s not get bogged down now — I’ll keep working through the examples, which probably cover how to ensure redundancy across hosts.
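For reference, the placement that prompted this observation can be checked as follows (assuming the run=my-nginx label that kubectl run applies by default):

# show which worker node each nginx pod landed on
kubectl get pods -l run=my-nginx -o wide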
# Delete the service, removes pods and containers
kubectl delete deployment,service my-nginx