
Unable to delete Azure Firewall?

TLDR: Fix it

  • If you have removed/deleted a firewall policy or attachment from the Azure Firewall, re-attach it, or re-create the policy/attachment with the same name (you will see the name in the CLI output, as detailed below).
  • Once you have re-attached or re-created it (an empty policy with the same name is enough), you can delete the firewall (Azure Cloud Shell PowerShell recommended) with the following command, updating the -Name and -ResourceGroupName parameters for your environment:
Remove-AzFirewall -Name "ZOAK-SecureGateway-Firewall" -ResourceGroupName "ZOAK-SecureAccessGateway-ResourceGroup" -Force

Remove-AzFirewall: Long running operation failed with status ‘Failed’

The Azure portal does not give much detail regarding the error message:
$ Remove-AzFirewall -Name "ZOAK-SecureGateway-Firewall" -ResourceGroupName "ZOAK-SecureAccessGateway-ResourceGroup"
...
Remove-AzFirewall: Long running operation failed with status 'Failed'. Additional Info:'The Resource 'Microsoft.Network/firewallPolicies/ZOAK-SecureGateway-Firewall-BasicPolicy' under resource group 'ZOAK-SecureAccessGateway-ResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix'
StatusCode: 200
ReasonPhrase: OK
Status: Failed
ErrorCode: ResourceNotFound
ErrorMessage: The Resource 'Microsoft.Network/firewallPolicies/ZOAK-SecureGateway-Firewall-BasicPolicy' under resource group 'ZOAK-SecureAccessGateway-ResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix

When attempting to delete via the CLI, I got a meaningful error message:

Additional Info:'The Resource 'Microsoft.Network/firewallPolicies/ZOAK-SecureGateway-Firewall-BasicPolicy' under resource group 'ZOAK-SecureAccessGateway-ResourceGroup' was not found.

That resource had already been deleted… so I re-created it (just an empty policy) with the same name, attached it, and then I could delete the firewall.
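
For reference, a minimal sketch of the fix with the Az PowerShell module (the -Location value and the attach step are assumptions based on my environment; adjust the names to match your error output):

# Re-create the missing policy with the exact name from the error message (an empty policy is enough)
New-AzFirewallPolicy -Name "ZOAK-SecureGateway-Firewall-BasicPolicy" -ResourceGroupName "ZOAK-SecureAccessGateway-ResourceGroup" -Location "australiaeast"

# Re-attach the policy, then delete the firewall
$fw = Get-AzFirewall -Name "ZOAK-SecureGateway-Firewall" -ResourceGroupName "ZOAK-SecureAccessGateway-ResourceGroup"
$fw.FirewallPolicy = (Get-AzFirewallPolicy -Name "ZOAK-SecureGateway-Firewall-BasicPolicy" -ResourceGroupName "ZOAK-SecureAccessGateway-ResourceGroup").Id
Set-AzFirewall -AzureFirewall $fw
Remove-AzFirewall -Name "ZOAK-SecureGateway-Firewall" -ResourceGroupName "ZOAK-SecureAccessGateway-ResourceGroup" -Force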


Output/Export query to file with Oracle on RDS

There are several different ways to do this; I find the easiest is to create a view and then use SQL Developer's Database Export wizard.

Requirements: Oracle SQL Developer installed, plus a connection to the RDS instance with privileges to create a view and select from the source tables.

Step 1: Create a view

https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-VIEW.html

Step 2: Use SQL Developer’s Database Export wizard to export the view to your desired file format

In SQL Developer: Tools → Database Export

  1. Select the correct DB connection
  2. Uncheck ‘Export DDL’
  3. Under ‘Export Data’ change Format to CSV or XLSX (or whatever file type is desired)
    1. Adjust file names and output dir as desired, click Next
  4. Uncheck all except ‘Views’, Next
  5. Ensure the correct schema is selected; if the schema is not the default schema for your user, click 'More', select the correct schema, and change the type from 'All Objects' to 'View'
  6. Click ‘Lookup’ and you will see the view you created in Step 1
  7. Select the view and hit the blue arrow to move it into the lower box, then click Next, review, and Finish… your export will now run, with a status box for the task.

List and clone all github repos (for org) with PowerShell

Step 1: List all repos
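
A minimal sketch using the GitHub REST API and Invoke-RestMethod ($org and $token are placeholders for your org name and a personal access token):

# Page through the org's repositories (100 per page)
$org = "your-org"                         # placeholder
$token = "your-personal-access-token"     # placeholder
$headers = @{ Authorization = "token $token" }
$repos = @(); $page = 1
do {
    $batch = Invoke-RestMethod -Uri "https://api.github.com/orgs/$org/repos?per_page=100&page=$page" -Headers $headers
    $repos += $batch
    $page++
} while ($batch.Count -eq 100)
$repos | ForEach-Object { $_.full_name }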

Step 2: Clone all repos (pulling where repo dir exists)
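
Continuing the sketch above: clone each repo, or pull where the repo directory already exists:

foreach ($repo in $repos) {
    if (Test-Path $repo.name) {
        # Directory exists - pull the latest changes
        git -C $repo.name pull
    } else {
        git clone $repo.clone_url
    }
}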


Eramba Community 2019 in Docker (docker-compose)

Eramba is an excellent open-source Governance, Risk and Compliance tool. Recently (10-SEP-2019), a new major release of the community version came out. Previously I used https://github.com/digitorus/eramba (which was based on https://hub.docker.com/r/k0st/alpine-eramba/) to start eramba instances quickly with docker and docker-compose.

As I could not find an updated version of these for the new release, I have made one. The repo for the 2019 community version (specifically c2.4.1) can be found here: https://github.com/markz0r/eramba-community-docker

Follow the steps in README.md and you should be testing the new eramba in no time.
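
For a rough idea (the README remains authoritative and may differ), spinning up a test instance looks something like:

git clone https://github.com/markz0r/eramba-community-docker.git
cd eramba-community-docker
docker-compose up -d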

Mar, 2020: Updated for community edition 2.8.1

Thanks to the team at Eramba for making the tool available for all!


OWASP Top 10 using AWS WAF Service

We have a web application that has been running on AWS for several years. As Application Load Balancers and the AWS WAF service were not available at the time, we used an external classic ELB pointing to a pool of EC2 instances running mod_security as our WAF solution. mod_security used the OWASP ModSecurity Core Rule Set.

Now that Application Load Balancers and the AWS WAF service are available, we would like to remove the CPU bottleneck that stems from using EC2 instances with mod_security as the current WAF.


Step 1 – Base-lining performance with the EC2 WAF solution.

The baseline was completed using https://app.loadimpact.com, where we ran 1000 concurrent users with immediate ramp-up. In our test with 2 x m5.large EC2 instances as the WAF, the WAFs became CPU-pinned within 2 minutes 30 seconds.

This test was repeated with the EC2 WAFs removed from the chain, and we averaged 61 ms across the loadimpact test with 1000 users. So now we need to implement the AWS WAF solution so that it can be compared.


Step 2 – Create an ‘equivalent’ rule-set and start using AWS WAF service.

We use terraform for this environment, so the CloudFormation web ACL and rules are not being used; I will start by testing the terraform code uploaded by traveloka. After having a look at the code in more detail, I decided I needed a better understanding of the terraform modules (and the AWS service), so I will write some terraform code from scratch.

So, getting started with the AWS WAF documentation, we read: 'define your conditions, combine your conditions into rules, and combine the rules into a web ACL'.

  • Conditions: request strings, source IPs, country/geolocation of the request IP, length of specified parts of the request, SQL code (SQL injection), header values (e.g. User-Agent). Conditions can have multiple values and use regex.
  • Rules: combinations of conditions along with an ACTION (allow/block/count). There are Regular rules, whereby conditions can be AND/OR chained, and Rate-based rules, whereby a rate-based condition is added.
  • Web ACLs: where the action for each rule is defined. Multiple rules can have the same action and thus be grouped in the same ACL. The WAF uses web ACLs to assess requests against rules in the order in which the rules are added to the ACL; whichever rule (if any) matches first determines which action is taken.

Starting simple: to get started I will implement a rate-limiting rule which limits requests to our login page (5 requests per minute from a specified IP), along with the basic OWASP rules from the terraform code uploaded by traveloka. Below is our main.tf with the aws_waf_owasp_top_10_rules created for this test.
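
As a sketch of just the rate-based piece (not the full main.tf), an equivalent rule can be created with the AWS WAF Classic CLI; the rule/metric names here are hypothetical, and note the service enforced a minimum rate limit of 2000 requests per 5 minutes at the time:

# Rate-based rules need a change token; scoping the rule to the login URI
# would be a separate byte-match condition added via update-rate-based-rule
$token = aws waf-regional get-change-token --query ChangeToken --output text
aws waf-regional create-rate-based-rule --name "LoginRateLimit" --metric-name LoginRateLimit --rate-key IP --rate-limit 2000 --change-token $token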


Step 3 – Validate functions of AWS WAF

To confirm blocking based on the rate-limiting rule, I am using Apache's benchmarking tool, ab.

ab -v 3 -n 2000 -c 100 https://<my_target.com.au>/login  > ab_2000_100_waf_test.log

This command logs request headers (-v 3 for verbose output), makes 2000 requests (-n 2000), and runs those requests 100 at a time (-c 100). I can then see failed requests by tailing the output:

tail -f ./ab_2000_100_waf_test.log  | grep -i response

All looks good for the rate-limit-based blocking, though it appears that blocking does not occur at exactly 2000 requests in the 5-minute period. It also appears that there is a significant (5-10 min) delay before metrics come through to the WAF stats in the AWS console.

In the AWS console, about 10 mins after running the ab tool, we can see the blocks:

The blocks are HTTP 403 responses from the ELB:

WARNING: Response code not 2xx (403)
LOG: header received:
HTTP/1.1 403 Forbidden
Server: awselb/2.0
Date: Mon, 01 Jul 2019 22:39:11 GMT
Content-Type: text/html
Content-Length: 134
Connection: close

After success on the rate-limiting rule, the OWASP Top 10 mitigation rules need to be tested. I will use OWASP ZAP to generate some malicious traffic and see what happens!

So it works – which is good, but I am not really confident about the effectiveness of the OWASP rules (as implemented on the AWS WAF). For now they will do, but some tuning will probably be desirable, as all of the requests OWASP ZAP made contained (clearly) malicious content, yet only 7% (53 / 755) of the requests were blocked by the WAF service. It will be interesting to see if there are false positives (valid requests that are blocked) when I conduct the step 4 performance testing.


Step 4 – Conduct performance test using the AWS WAF service.

Conducting a load test with https://app.loadimpact.com demonstrated that the AWS WAF service is highly unlikely to become a bottleneck (though this may differ for other applications and implementations).


Step 5 – Migrate PROD to the AWS WAF service.

As our environment is fully 'terraformed', implementing the AWS WAF service as part of our terraform code was working within an hour or so (which is good time for me!).


Next Steps

Security Automations: https://aws.amazon.com/solutions/aws-waf-security-automations/ – is this easy to do with Terraform? https://github.com/awslabs/aws-waf-security-automations has:

  • waf-reactive-blacklist
  • waf-bad-bot-blocking
  • waf-block-bad-behaving
  • waf-reputation-lists

Packer and Ansible testing with Hyper-V (on Windows 10)

Why?

With almost all of our clients now preferring AWS and Azure for hosting VMs / Docker containers, we have to manage a lot of AMIs / VM images. Ensuring that these AMIs are correctly configured, hardened and patched is a critical requirement. To do this time- and cost-effectively, we use packer and ansible. There are solutions such as Amazon's ECS that extend the boundary of the opaque cloud all the way to the containers, which has a number of benefits but does not currently meet a number of non-functional requirements for most of our clients. If those non-functional requirements were gone, or met by something like AWS ECS, it would be hard to argue against simply using terraform and ECS – removing our responsibility for managing the Docker host VMs.

Anyway, we are making some updates to our IaaS code base, which includes a number of new requirements and code changes to our packer and ansible code. To make these changes correctly and quickly I need a build/test cycle that is as short as possible (shorter than spinning up a new EC2 instance). Fortunately, one of the benefits of packer is its 'cloud agnosticism'… so theoretically I should be able to test 99% of my new packer and ansible code on my Windows 10 laptop using packer's Hyper-V builder.

Setting up

I am running Windows 10 Pro on a Dell XPS 15 9560. VirtualBox is the most common go-to option for local VM testing, but that's not an option if you are already running Hyper-V (which I am). So to get things started we need to:

  1. Have a git solution for Windows – I am using Microsoft's VS Code (which is really a great open-source tool from M$)
  2. Install packer for Windows, ensuring the executable is in the Windows PATH
  3. Create a VM in Hyper-V to act as a base template (I am using CentOS 7 minimal, as we use CentOS AMIs on AWS: https://www.centos.org/download/)
  4. Install Hyper-V Linux Integration Services on the CentOS 7 base VM (this is required for Packer to be able to determine a new VM's IP address) – if you are stuck with packer failing to connect to the VM over SSH and you are using a Hyper-V switch, this will most likely be the issue
  5. Add a Hyper-V builder to our packer.json (as below)
...
  {
    "clone_from_vm_name": "sonet-ami-template",
    "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
    "headless": true,
    "ssh_password": "{{user `ssh_username`}}",
    "ssh_username": "{{user `ssh_username`}}",
    "switch_name": "Default Switch",
    "type": "hyperv-vmcx"
  }
...

Now, assuming the packer and ansible code is in a functional state, I can build a new VM and run packer + ansible via PowerShell (run with administrative privileges) with:

packer build --only hyperv-vmcx packer.json

AWS RDS (Oracle 12c) Offsite Backups

A lot of people need to do offsite backups for AWS RDS, which can be done trivially within AWS. But if you need offsite backups to protect against things like an AWS account breach or AWS-specific issues, the backups must include diversification of suppliers.

I am going to use Amazon’s Data Migration service to replicate AWS RDS data to a VM running in Azure and set up snapshots/backups of the Azure hosts.

The new (2018) AWS Data Migration Service solves offsite RDS backup problems.

The steps I used to do this are:

  1. Set up an Azure Windows 2016 VM
  2. Create an IPSec tunnel between the Azure Windows 2016 VM and my AWS Native VPN
  3. Install matching version of Oracle on the Windows 2016 VM
  4. Configure Data Migration service
  5. Create a data migration and continuous replication task
  6. Snapshots/Backups and Monitoring
  7. Debug and Gotchas

1,2 – Set up Azure Windows 2016 VM and IPSec tunnel

Create a network on Azure and place a VM in the network with 2 interfaces. One interface must have a public IP; call this one 'external', and call the other interface 'internal'. Once you have the public IP address of your Windows 2016 VM, create a 'Customer Gateway' in your AWS VPC pointing to that IP. You will also need a 'Virtual Private Gateway' configured for that VPC. Then create a 'Site-to-Site VPN connection' in your VPC (it won't connect for now, but create it anyway). Configure your Azure Win 2016 VM to make an IPSec tunnel by following these instructions (the instructions are for 2012 R2, but the only tiny difference is some menu items):
https://docs.aws.amazon.com/vpc/latest/adminguide/customer-gateway-windows-2012.html#cgw-win2012-download-config. Once this is completed, both your AWS site-to-site connection and your Azure VM are trying to connect to each other. Ensure that the Azure VM has its security groups configured to allow your AWS site-to-site VPN to reach the Azure VM (I am not sure which ports and protocols specifically; I just white-listed all traffic from the two AWS tunnel endpoints). Once this was done it took around 5 mins for the tunnel to come up (I was checking the status via the AWS Console). I also found that it requires traffic to be flowing over the link, so I was running a ping -t <aws_internal_ip> from my Azure VM. Also note that you will need to add routes to your applicable AWS route tables and update AWS security groups for the Azure subnet as required.
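
For reference, the Customer Gateway piece can also be created with the AWS CLI (the public IP is a placeholder for your Azure VM's external address):

aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000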

3 – Install matching version of Oracle on the Windows 2016 VM

4,5 – Configure Data Migration service and migration/replication

Log into your AWS console, go to 'Data Migration Service' (DMS) and hit Get Started. You will need to set up a replication VM (well, at least pick a size, security group, type, etc.). Note that the security group you add the replication host to must have access to both your RDS and your Azure DBs – I could not pick which subnet the host went into, so I had to add routes for a couple more subnets than expected. Next you will need to add your source and target databases; when you add in the details and hit Test, the wizard will confirm connectivity to both databases. I ran into issues on both of these points because of not adding the correct security groups, the Windows firewall on the Azure VM, and my VPN link dropping due to no traffic (I am still investigating a fix better than ping -t for this). Next you will create a migration/replication task; if you are going to be doing ongoing replication, you need to run the following on your Oracle RDS DB:

  • exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
  • exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL');
  • exec rdsadmin.rdsadmin_util.alter_supplemental_logging('DROP','PRIMARY KEY');

You can filter by schema, which should provide you with a drop-down box to select the schema(s). Ensure that you enable logging on the migration/replication task (if you get errors, which I did the first couple of attempts, you won't be fixing anything without the logs).

6 – Snapshots and Monitoring

For my requirements, daily snapshots/backups of the Azure VM provide sufficient coverage. The Backup vault must be upgraded to v2 if you are using a Standard SSD disk on the Azure VM, see:
https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade. To enable email notifications for Azure backups, go to the Azure portal, select the applicable vault, click on 'View alerts' -> 'Configure notifications' -> enter an email address and check 'critical' (or whichever types of email notifications you want). Other recommended monitoring checks include: a ping for VPN connectivity, a status check of the DMS task (using the AWS CLI), and an SQL query on the destination database confirming the latest timestamp of a table that should have regular updates.
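
For the DMS task status check, something like the following with the AWS CLI works (adjust the query/format to taste):

aws dms describe-replication-tasks --query "ReplicationTasks[*].{Task:ReplicationTaskIdentifier,Status:Status}" --output table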

7 – Debug and Gotchas

  • Azure security group allowing AWS vpn tunnel endpoint to Azure VM
  • Windows firewall rule on the VM allowing Oracle traffic (default port 1521) from the AWS RDS private subnet (a sketch is shown after this list)
  • Route tables on AWS subnets to route traffic to your Azure subnet via the Virtual Private Network
  • Security groups on AWS to allow traffic from Azure subnet
  • Stability of the AWS <-> Azure VM site-to-site tunnel requires constant traffic
  • The DMS replication host seems to go into an arbitrary subnet of your VPC (there is probably some default setting I didn't see), so check this and ensure it has routes for the Azure site-to-site
  • Ensure the RDS Oracle database has the archive log retention and supplemental logs settings as per steps 4,5.
  • Azure backup job fails with ‘Currently Azure Backup does not support Standard SSD disks’. – upgrade backup vault: https://docs.microsoft.com/en-us/azure/backup/backup-upgrade-to-vm-backup-stack-v2#upgrade
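
For the Windows firewall item above, a sketch of the inbound rule (the RemoteAddress range is a placeholder for your RDS private subnet):

New-NetFirewallRule -DisplayName "Oracle-from-AWS-RDS" -Direction Inbound -Protocol TCP -LocalPort 1521 -RemoteAddress 10.0.0.0/16 -Action Allow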

Windows Remote Desktop Services 2016 Review

It has been a long while since I looked at RDS – with Azure, Office 365 and Server 2016 there seem to be a lot of new (or better) options. To get across some of the options, I have decided to review Microsoft's documentation with the aim of deciding on a solution for a client. The specific scenario I am looking at is a client with low-spec workstations, using Office 365 Business Premium (including OneDrive) and Windows 10, with a single Windows 2016 virtual private server.

Some desired features:

  • Users should be able to use their workstations or the remote desktop server interchangeably
  • Everything done on workstations should be replicated to the RDS server and vice versa
  • Contention on editing documents should be dealt with reasonably
  • The credentials for signing into workstations, email and remote services should be the same (ideally with a 2FA option for RDS)

Issues faced:

  • The Office 365 users were created several months before the RDS server was deployed
    • The Azure AD Connect service, which synchronizes users in an Active Directory deployment with Office 365 (Azure AD) users, is a one-way street: it assumes the 'on-prem' Active Directory objects already exist and only need to be created in Azure AD (Office 365) – see the workaround for this here
  • Office 365 licensing for 'shared' computers means that Office 365 Business Premium users can't use a VPS – so enterprise plans or Business plus must be used.

How to configure Remote Desktop Services? After getting Active Directory installed and configured to sync with Azure AD, I now need to choose and implement the RDS configuration.

Starting with the Microsoft docs, we have the following options:

  • Session-based virtualization – Many users per host
  • VDI – a virtual machine for each user – note that if your server is already a VM, this isn't really an option (nested VMs are not ideal)

Based on our client's situation, session-based makes much more sense for now. Next up: what are we going to publish to the users logging into the Remote Desktop Service?

  • Desktops – Providing users with the full desktop experience
  • RemoteApps – Users run apps that seem to be running locally but are in fact being served via RDS

Desktops make sense for now. So, how do we set up a session-based desktop for remote access by multiple users?

  1. Add the required roles to the RDS servers (see: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure)
  2. Create an AD service and link it to office 365 with Azure AD Connect

As Microsoft says:

  • You still must have an internet-facing server to utilize RD Web Access and RD Gateway for external users
  • You still must have an Active Directory and, for highly available environments, a SQL database to house user and Remote Desktop properties
  • You still must have communication access between the RD infrastructure roles (RD Connection Broker, RD Gateway, RD Licensing, and RD Web Access) and the end RDSH or RDVH hosts to be able to connect end-users to their desktops or applications.

After setting all of this up, I am very happy with the results. The single source of truth for users must be the 'on-prem' AD. Syncing an on-prem AD service to Office 365 is almost seamless, with some minor tweaks required that are fairly easy to find with some googling.


Sync users from Office 365 for a new Active Directory Install

Importing users from Office 365 to an on-prem AD can be required in cases where an organisation that has been using Office 365 wants to start using a Remote Desktop Service or the like. To reduce the number of passwords and provide single sign-on (or at least same sign-on), the Windows Server may have Azure AD Connect installed and be syncing with the business's Office 365 account. The problem is that, out of the box, Azure AD Connect is a one-way street. It only creates objects on the Azure side – it does not import Office 365 users into the server's Active Directory.

To get users from Office 365 created in a new Windows Active Directory Service:

## Connect to Office 365 with PowerShell
Set-ExecutionPolicy Unrestricted -Force # can be reverted at the end
$O365CREDS = Get-Credential
$SESSION = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Credential $O365CREDS -Authentication Basic -AllowRedirection
Import-PSSession $SESSION
Connect-MsolService -Credential $O365CREDS 
# For more details see: https://oddytee.wordpress.com/2013/03/21/connect-to-office-365-with-powershell/

## Get a list of Office 365 User Objects 
Get-User | Export-Csv "C:\O365Export.csv" -NoTypeInformation 

## Use that list to create new AD users - slightly modified from source material
import-csv C:\O365Export.csv -Encoding UTF8 | foreach-object {New-ADUser -Name ($_.Firstname + "." + $_.Lastname) -SamAccountName ($_.Firstname + "." + $_.Lastname) -GivenName $_.FirstName -Surname $_.LastName -City $_.City -Department $_.Department -DisplayName $_.DisplayName -Fax $_.Fax -MobilePhone $_.MobilePhone -Office $_.Office -PasswordNeverExpires ($_.PasswordNeverExpires -eq "True") -OfficePhone $_.PhoneNumber -PostalCode $_.PostalCode -EmailAddress $_.SignInName -State $_.State -StreetAddress $_.StreetAddress -Title $_.Title -UserPrincipalName $_.UserPrincipalName -Enabled $True -AccountPassword (ConvertTo-SecureString -string "Password" -AsPlainText -force) }



Changing OpenStack endpoints from HTTP to HTTPS

After deploying OpenStack Keystone, Swift and Horizon, I need to change the public endpoints for these services from HTTP to HTTPS.

Horizon endpoint

This deployment has a single server for Horizon. The TLS/SSL termination point is on the server (no load balancers or the like).

To get Horizon using TLS/SSL, all that needs to be done is adding the key, cert and CA, and updating the vhost. My vhost now looks like this:

WSGISocketPrefix run/wsgi
<VirtualHost *:80>
	ServerAdmin [email protected]
	ServerName horizon-os.mwclearning.com.au
	ServerAlias api-cbr1-os.mwclearning.com.au
	ServerAlias api.cbr1.os.mwclearning.com.au
	Redirect permanent / https://horizon-os.mwclearning.com.au/dashboard
</VirtualHost>

<VirtualHost *:443>
	ServerAdmin [email protected]
	ServerName horizon-os.mwclearning.com.au
	ServerAlias api-cbr1-os.mwclearning.com.au
	ServerAlias api.cbr1.os.mwclearning.com.au

	WSGIDaemonProcess dashboard
	WSGIProcessGroup dashboard
	WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
	Alias /dashboard/static /usr/share/openstack-dashboard/static

	<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
		Options All
		AllowOverride All
		Require all granted
	</Directory>

	<Directory /usr/share/openstack-dashboard/static>
		Options All
		AllowOverride All
		Require all granted
	</Directory>
	SSLEngine on
	SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
	SSLCertificateKeyFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.key.pem
	SSLCertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.cert.pem
	SSLCACertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.ca.pem
</VirtualHost>

With a systemctl restart httpd, this was working…

Logging into Horizon and checking the endpoints under Project -> Compute -> API Access, I can see some more public HTTP endpoints:

Identity	http://api.cbr1.os.mwclearning.com:5000/v3/
Object Store	http://swift.cbr1.os.mwclearning.com:8080/v1/AUTH_---

These endpoints are defined in Keystone; to see and edit them there, I can SSH to the Keystone server and run some MySQL queries. Before I do this, I need to make sure that the Swift and Keystone endpoints are configured to use TLS/SSL.

Keystone endpoint

Again, the TLS/SSL termination point is Apache… so some modification to /etc/httpd/conf.d/wsgi-keystone.conf is all that is required:

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>
	SSLEngine on
	SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
	SSLCertificateKeyFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.key.pem
	SSLCertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.cert.pem
	SSLCACertificateFile /etc/pki/apache/wildcard.mwclearning.com.au-201710.ca.pem
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
Alias /identity_admin /usr/bin/keystone-wsgi-admin
<Location /identity_admin>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>
</VirtualHost>

I left the internal interface as HTTP for now…

Swift endpoint

OK, so Swift is a bit different… it is actually recommended to have an SSL termination service in front of the Swift proxy; see: https://docs.openstack.org/security-guide/secure-communication/tls-proxies-and-http-services.html

With that recommendation from OpenStack, and given the ease of creating an Apache reverse proxy, I will do that.

# install packages
sudo yum install httpd mod_ssl

After the install, create a vhost /etc/httpd/conf.d/swift-endpoint.conf with the following contents:

<VirtualHost *:443>
  ProxyPreserveHost On
  ProxyPass / http://127.0.0.1:8080/
  ProxyPassReverse / http://127.0.0.1:8080/

  ErrorLog /var/log/httpd/swift-endpoint_ssl_error.log
  LogLevel warn
  CustomLog /var/log/httpd/swift-endpoint_ssl_access.log combined

  SSLEngine on
  SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
  SSLCertificateKeyFile /etc/httpd/tls/wildcard.mwclearning.com.201709.key.pem
  SSLCertificateFile /etc/httpd/tls/wildcard.mwclearning.com.201709.cert.pem
  SSLCertificateChainFile /etc/httpd/tls/wildcard.mwclearning.com.201709.ca.pem
</VirtualHost>
# restart apache
systemctl restart httpd

So now we should have an endpoint that will terminate TLS and forward HTTPS requests from port 443 to the Swift listener on port 8080.

Updating internal auth

As Keystone's auth listener is the same for internal and external traffic (one vhost), I also updated the internal address to match the FQDN, allowing for valid TLS.

Keystone service definitions

mysql -u keystone -h services01 -p
use keystone;
select * from endpoint;
# Update these endpoints:
update endpoint set url='https://swift-os.mwclearning.com:8080/v1/AUTH_%(tenant_id)s' where id='579569...';
update endpoint set url='https://api-cbr1-os.mwclearning.com:5000/v3/' where id='637e843b...';
update endpoint set url='http://controller01-int.mwclearning.com:5000/v3/' where id='ec1ad2e...';

Now, after restarting the services, all is well with TLS!
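
To confirm, the service catalog can be listed with the OpenStack client (assuming admin credentials are sourced):

openstack endpoint list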