Testing Kubernetes and CoreOS

In the previous post I described some of the general direction and ‘wants’ for the next step of our IT Ops, summarised as:

Want -> Description
  • Continuous deployment -> more automation and resiliency in our deployments, without adding our own code that needs to be changed when architecture and service dependencies change
  • Automation of deployments -> deployments, rollbacks, service discovery, easy local deployments for devs
  • Less time on updates -> automation of updates
  • Reduced dependence on config management (puppet) -> reduce the number of puppet policies that are applied to hosts
  • Image management -> image management (with immutable post deployment)
  • Reduce baseline work for IT staff -> IT staff have low baseline work, more room for initiatives
  • Reduce hardware footprint -> no increase in hardware resource requirements (cost)

Start with the basics

Let's start with the simple demo deployment supplied by the CoreOS team.

https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html

That setup was pretty straightforward (as supplied demos usually are). A simple verification that the k8s components are up and running:
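
The original commands aren't preserved in this export; a minimal sanity check, assuming kubectl is already configured against the Vagrant cluster as described in the CoreOS guide:

    # List the cluster nodes and the system pods; all should be Ready/Running
    kubectl get nodes
    kubectl get pods --namespace=kube-system
    kubectl cluster-info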

Note: it can take some time (5 minutes or longer if CoreOS is updating) for the Kubernetes cluster to become available. To see status, vagrant ssh c1 (or w1/w2/e1) and run journalctl -f to follow the service logs.

Accessing the Kubernetes dashboard requires tunnelling, which when using the Vagrant setup can be accomplished with this script: https://gist.github.com/iamsortiz/9b802caf7d37f678e1be18a232c3cc08 (note that the gist is for a single node; if using multi-node, line 21 needs to be changed accordingly).

Now the dashboard can be accessed at http://localhost:9090/.

Now let's try some simple k8s examples:

Create a load-balanced nginx deployment:
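
The exact commands aren't preserved in this post; a sketch of the kubectl run/expose approach from the k8s 1.3-era docs (newer releases replace this with kubectl create deployment plus kubectl scale):

    # Two nginx replicas behind a NodePort service
    kubectl run my-nginx --image=nginx --replicas=2 --port=80
    kubectl expose deployment my-nginx --port=80 --type=NodePort
    kubectl get pods -o wide   # -o wide shows which worker node each pod landed on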

First interesting point… with the simple deployment above, I have already gone awry. Though I have 2 nginx containers (presumably for redundancy and load balancing), they have both been deployed on the same worker node (host). Let's not get bogged down now; I will keep working through the examples, which probably cover how to ensure redundancy across hosts.

Reviewed config file (pod) options: http://kubernetes.io/docs/user-guide/configuring-containers/

Deploy demo application

https://github.com/kubernetes/kubernetes/blob/release-1.3/examples/guestbook/README.md

  1. Create services for the redis master, redis slaves and frontend
  2. Create deployments for the redis master, redis slaves and frontend (sketched below)
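
Roughly what the guestbook walkthrough boils down to (the file names follow the release-1.3 example tree linked above; check the README for the exact paths):

    kubectl create -f examples/guestbook/redis-master-deployment.yaml
    kubectl create -f examples/guestbook/redis-master-service.yaml
    kubectl create -f examples/guestbook/redis-slave-deployment.yaml
    kubectl create -f examples/guestbook/redis-slave-service.yaml
    kubectl create -f examples/guestbook/frontend-deployment.yaml
    kubectl create -f examples/guestbook/frontend-service.yaml
    kubectl get services frontend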

Pretty easy… now how do we get external traffic to the service? Either NodePorts, LoadBalancers or an Ingress resource.

Next let's look at how to extend Kubernetes to …

Why look at Kubernetes and CoreOS

We are currently operating a service-oriented architecture that is ‘dockerized’, with both hosts and containers running CentOS 7, deployed straight on top of EC2 instances. We also have a deployment pipeline built with beanstalk + homegrown scripts. I imagine our position/maturity is similar to a lot of SMEs: we have aspirations of being on top of new technologies and practices but are somewhere in between old school and new school:

Old School -> New School
  • IT and Dev separate -> DevOps (Ops and Devs have the same goals and responsibilities)
  • Monolithic/large services -> Microservices
  • Big releases -> Continuous deployment
  • Some automation -> Almost total automation with self-service
  • Static scaling -> Dynamic scaling
  • Config management -> Image management (with immutable deployments)
  • IT staff have high baseline work -> IT staff have low baseline work, more room for initiatives

This is not about which end of this incomplete spectrum is better… we have decided that, for our place in the world, moving further toward the new-school end is desirable. I know there are a lot of experienced IT operators who take this view:

Why CoreOS for Docker Hosts?

CoreOS: A lightweight Linux operating system designed for clustered deployments providing automation, security, and scalability for your most critical applications – https://coreos.com/why/

Our application and supporting services run in Docker, so there should not be any dependencies on the host operating system (apart from the Docker engine and storage mounts).

Some questions I ask myself now:

  • Why do I need to monitor for and stage deployments of updates?
  • Why am I managing packages on a host OS that could be immutable (like CoreOS is, kind of)?
  • Why am I managing what should be homogeneous machines with puppet?
  • Why am I nursing host machines back to health when things go wrong (instead of blowing them away and redeploying)?
  • Why do I need to monitor SE Linux events?

I want a Docker Host OS that is/has:

  • Smaller, Stricter, Homogeneous and Disposable
  • Built-in host and service clustering
  • As little management as possible post deployment

CoreOS looks good for removing that set of questions and satisfying these wants.

Why Kubernetes?

Kubernetes: “A platform for automating deployment, scaling, and operations of application containers across clusters of hosts” – http://kubernetes.io/docs/whatisk8s/

Some questions I ask myself now:

  • Should my deployment, monitoring and scaling be completely separate or part of one platform?
  • Why do I (IT ops) still need to be around for prod deployments (no automatic success criteria for staged deploys and no automatic rollback)?
  • Why are our deployment scripts so complex and non-portable?
  • Do I want a scaling solution outside of AWS Auto-Scaling groups?

I want a tool/platform to:

  • Streamline and rationalise our complex deployment process
  • Make monitoring, scaling and deployment more manageable without our many lines of homebaked scripts
  • Generally make our monitoring, scaling and deployment more able to meet changing requirements

Kubernetes looks good for removing that set of questions and satisfying these wants.

Next steps

  • Create a CoreOS cluster
  • Install Kubernetes on the cluster
  • Deploy an application via Kubernetes
  • Assess if CoreOS and Kubernetes take us in a direction we want to go

Monitoring client-side performance and JavaScript errors

The rise of single-page apps (e.g. AngularJS) presents some interesting problems for Ops. Specifically, the increased dependence on browser-executed code means that real user experience monitoring is a must.


To that end I have reviewed some JavaScript agent monitoring solutions.

The solution(s) must meet the following requirements:

  • Must have:
    • Detailed javascript error reporting
    • Negligible performance impact
    • Real user performance monitoring
    • Effective single-page app (AngularJS) support
    • Real time alerting
  • Nice to have:
    • Low cost
    • Easy to deploy and maintain integration
    • Easy integration with tools we use for notifications (icinga2, Slack)

As our application is a single-page Angular app, New Relic Browser requires that we pay US$130 for any single-page app capability. The JavaScript error detection was not very impressive, as uncaught exceptions outside of the Angular app were not reported without Angular integration.

Google Analytics with custom event push does not have any real-time alerting, which disqualifies it as an Ops solution.

AppDynamics Browser was easy to integrate and getting JavaScript error details in the console was straightforward, but getting those errors to communication tools like Slack was surprisingly difficult. Alerts are based on health rules that fire when metric thresholds are breached, so I can send an alert saying there were more than 0 JavaScript errors in the last minute, but with no details about the error and no direct link to it.

Sentry.io was simple to add, and simple to get alerting with click-through to all the JavaScript error info. It does no performance monitoring, though.

Conclusion: sticking to the Unix philosophy, we are using Sentry.io for JavaScript error alerting and AppDynamics Browser Lite for performance alerting. Both have free tiers to get started (ongoing, not just a 30-day trial).

Getting started with Gatling – Part 2

With the basics of Simulations, Scenarios, Virtual Users, Sessions, Feeders, Checks, Assertions and Reports down, it's time to think about what to load test and how.

I will start with a test that tries to mimic the end-user experience. That means all the 3rd-party JavaScript, CSS, images etc. should be loaded. It does not seem reasonable to say our load test performance was great when none of our users will get a responsive app because of all the things we depend on (though, yes, most of them will likely already be cached by the user). This increases the complexity of the simulation scripts, as there will be lots of additional resource requests cluttering things up, so it is very important for maintainability to avoid code duplication and use the singleton object functionality available in Scala.

Using the recorder

As I want to include CDN calls, I tried the recorder's ‘Generate CA’ functionality, which is supposed to generate certs on the fly for each CN. This would be convenient as I could just trust a locally generated CA and not have to track down and trust all sources. Unfortunately I could not get the recorder to generate its own CA, and when using a local CA generated with openssl I could not feed the CA password to the recorder. I only spent 15 minutes on this before reverting to the default self-signed cert. Reviewing Firefox's network panel (Firefox menu -> Developer -> Network) shows any blocked sources, which can then be visited directly and trusted with our fake cert. There are some fairly serious security implications of doing this; I personally only use my testing browser (Firefox) with these types of proxy tools and never for normal browsing.

The recorder is very handy for getting the raw code you need into the test script, but it is not a complete test. Next up:

  1. Dealing with authentication headers – the recorded simulation does not set the header based on the response to the login attempt
  2. Requests dependent on the previous response – the recorder does not capture this dependency, it only sees the raw outbound requests, so responses will need to be parsed
  3. Validating responses

Dealing with authentication headers

The Check API is used for verifying that the response to a request matches expectations and capturing some elements in it.

After half an hour or so of playing around with the Check API, it is behaving as I want, thanks to good, concise documentation.

The .check is looking for the header named “Set-Cookie”, then extracting the auth token using a regex, and finally saving the token in the session under the key auth_token:
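
The original snippet is not preserved in this export; a minimal sketch of that login check in Gatling 2.x, where the endpoint, payload and cookie/token format are assumptions (the imports are shared by the later snippets):

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    val login = exec(
      http("login")
        .post("/api/login")
        .formParam("username", "${username}")
        .formParam("password", "${password}")
        .check(status.is(200))
        // look in the Set-Cookie header, pull the token out with a regex,
        // and save it in the virtual user's session as "auth_token"
        .check(headerRegex("Set-Cookie", """auth_token=([^;]+)""").saveAs("auth_token"))
    )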

In subsequent requests I need to include a header containing this value, along with some other headers. So instead of listing them out each time, a function makes things much neater:
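
A sketch of such a helper; the header names and the Bearer scheme are assumptions, and ${auth_token} is the value saved by the login check above:

    // Common headers for authenticated requests, built in one place
    def authHeaders: Map[String, String] = Map(
      "Authorization"    -> "Bearer ${auth_token}",
      "Accept"           -> "application/json",
      "X-Requested-With" -> "XMLHttpRequest"
    )

    // ...and reused on every request that needs it
    val getProfile = exec(
      http("get profile")
        .get("/api/profile")
        .headers(authHeaders)
        .check(status.is(200))
    )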

It's also worth noting that, to ensure all this was working as expected, I modified conf/logback.xml to output all HTTP request/response data to stdout.

Requests dependent on the previous response

With many modern applications, the behaviour of the GUI is dictated by responses from an API. For example, when a user logs in, the GUI requests a JSON document with all (max 50) of the user's open requests; when the GUI receives this, the requests are rendered. In many cases this rendering process involves many more HTTP requests, depending on the time and state of the user, which may vary significantly. So, if we are trying to imitate the end-user experience, instead of requesting the render info for the same open requests every time, we should parse the JSON response and adjust subsequent requests accordingly. Thankfully Gatling allows for the use of JsonPath. I got stuck trying to get all of the id values out of a JSON response and then create requests for each of them: I had incorrectly assumed that the Gatling EL ‘random’ function could be called on a vector, which made me think the vector was ‘undefined’ as per the error message. The vector was in fact as expected, which became clear by printing it.
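
A sketch of capturing those ids; the endpoint, the "id" field and the session key name are assumptions:

    // Fetch the open requests and save every id found in the JSON body
    val listOpenRequests = exec(
      http("list open requests")
        .get("/api/requests?state=open")
        .headers(authHeaders) // from the earlier sketch
        .check(status.is(200))
        .check(jsonPath("$..id").findAll.saveAs("request_ids"))
    )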

To run queries with all of the values pulled out of the JSON response we can use the foreach component. Again I got stuck for a little while here: I was putting the foreach component within an exec function, where (as below) it should be outside of an exec and reference a chain that contains an exec.
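
A sketch of that loop, building on the snippets above; the detail path is an assumption:

    // Fetch the list first, then iterate over the captured ids and request the
    // render info for each. Note the foreach wraps a chain containing an exec,
    // rather than sitting inside one.
    val fetchRequestDetails = exec(listOpenRequests)
      .foreach("${request_ids}", "id") {
        exec(
          http("get request detail")
            .get("/api/requests/${id}")
            .headers(authHeaders) // from the earlier sketch
            .check(status.is(200))
        )
      }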

Validating responses

What do we care about in responses?

  1. HTTP response headers (generally expecting 200 OK)
  2. HTTP response body contents – we can define expectations based on understanding of app behaviour
  3. Response time – we may want to define responses taking more than 2000ms as failures (cue application performance sales pitch)

Checking the response status is quite simple and can be seen explicitly above in .check(status.is(200)). In fact, there is no need for 200 checks to be explicit, as “A status check is automatically added to a request when you don’t specify one. It checks that the HTTP response has a 2XX or 304 status code.”

HTTP response body content checks are valuable for ensuring the app behaves as expected. They also require a lot of maintenance, so it is important to implement them with code reuse where possible. Gatling is great for this, as we can use Scala and all the power that comes with it (i.e. reusable objects and functions across all tests).

Next up is response time checks. Note that these response times are specific to the HTTP layer and do not guarantee a good end-user experience. JavaScript and other rendering, along with blocking requests, mean that performance testing at the HTTP layer is incomplete performance testing (though it is the meat and potatoes).

Gatling provides the Assertions API to conduct checks globally (on all requests); there are numerous scopes, statistics and conditions to choose from. For specific operations, responseTimeInMillis and latencyInMillis are provided by Gatling. responseTimeInMillis includes the time it takes to fully send the request and fully receive the response (from the test host), so as a default I use responseTimeInMillis as it gives slightly higher coverage as a test.

These three verifications/tests can be seen in the sketch below:
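
The original example is not preserved in this export; a minimal simulation combining the three kinds of verification, with the endpoint, JSON field and thresholds as placeholder assumptions:

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._
    import scala.concurrent.duration._

    class OpenRequestsSimulation extends Simulation {

      val httpConf = http.baseURL("https://app.example.com")

      val scn = scenario("Open requests").exec(
        http("list open requests")
          .get("/api/requests?state=open")
          .check(status.is(200))                                  // 1. response status
          .check(jsonPath("$..id").findAll.saveAs("request_ids")) // 2. body content
          .check(responseTimeInMillis.lessThan(2000))             // 3. per-request response time
      )

      setUp(scn.inject(rampUsers(50) over (60 seconds)))
        .protocols(httpConf)
        .assertions(
          global.responseTime.max.lessThan(5000),           // global response-time criteria
          global.successfulRequests.percent.greaterThan(99) // global success criteria
        )
    }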

That’s about all I need to get started with Gatling! The next steps are:

  1. extending coverage (more tests!)
  2. putting processes in place to notify and act on identified issues
  3. refining tests to provide more information about the likely problem domain
  4. making a modular and maintainable test library that can be updated in one place to deal with changes to the app
  5. aggregating results for trending and correlation with changes
  6. spinning up and down environments specifically for load testing
  7. Jenkins integration

Getting started with Gatling – Part 1

With the need to do some more effective load testing I am getting started with Gatling. Why Gatling and not JMeter? I have not used either so I don’t have a valid opinion. I made my choice based on:

Working through the Gatling Quickstart

Next step is working through the basic doc: http://gatling.io/docs/2.2.1/quickstart.html#quickstart. Pretty simple and straightforward.

Moving on to the more advanced tutorial: http://gatling.io/docs/2.2.1/advanced_tutorial.html#advanced-tutorial. This included:

  • creating objects for process isolation
  • virtual users
  • dynamic data with Feeders and Checks
  • First usage of Gatling’s Expression Language (not really a language o_O)

The most interesting function:

…Simulations are plain Scala classes so we can use all the power of the language if needed.

Next I covered off the key concepts in Gatling (a minimal example tying them together follows the list):

  • Virtual User -> a logical grouping of behaviours, e.g. Administrator (login, update user, add user, logout)
  • Scenario -> defines a Virtual User's behaviours, e.g. login, update user, add user, logout
  • Simulation -> a description of the load test (which scenarios, how many users and what ramp-up)
  • Session -> each virtual user is backed by a Session, which allows sharing of data between operations (see above)
  • Feeders -> a method for getting input data for tests, e.g. login values, search and response values
  • Checks -> verify HTTP response codes and capture elements of the response body
  • Assertions -> define acceptance criteria (e.g. slower than x means failure)
  • Reports -> aggregated output
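
A tiny sketch tying these concepts together; the CSV file name, URL and field names are assumptions:

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class LoginSimulation extends Simulation {
      val users = csv("users.csv").circular            // Feeder: username,password per line
      val scn = scenario("Login")                      // Scenario: a virtual user's behaviour
        .feed(users)
        .exec(
          http("login")
            .post("https://app.example.com/api/login")
            .formParam("username", "${username}")
            .formParam("password", "${password}")
            .check(status.is(200))                     // Check: verify the response
        )
      setUp(scn.inject(atOnceUsers(10)))               // Simulation: how many users, injected how
        .assertions(global.failedRequests.count.is(0)) // Assertion: acceptance criteria
    }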

Last review for today was of a presentation by Stephane Landelle and Romain Sertelon, the authors of Gatling.

Next step is to implement some tests and figure out a good way to separate simulations, scenarios and reports.

Transitioning from a standard CA to Let's Encrypt!

With the go-live of https://letsencrypt.org/ it's time to transition from the pricey and manual standard SSL cert issuing model to a fully automated process using the ACME protocol. Most orgs have numerous usages of CA-purchased certs; this post will cover hosts running Apache/Nginx and AWS ELBs, with all of these usages replaced by automated provisioning and renewal of Let's Encrypt signed certs.

Provisioning and auto-renewing Apache and nginx TLS/SSL certs

For externally accessible sites where Apache/Nginx handles TLS/SSL termination, moving to Let's Encrypt is quick and simple:

1 – Install the letsencrypt client software. There are RHEL and CentOS rpms, so that's as simple as adding the package to puppet policies or installing it directly:
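
For example, on CentOS 7 the client is packaged in EPEL (newer repos call it certbot, older ones letsencrypt):

    sudo yum install -y epel-release
    sudo yum install -y certbot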

2 – Provision the keys and certificates for each of the required virtual hosts. If a virtual host has aliases, specify multiple names with the -d arg.
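
A sketch using the webroot authenticator; the domains and webroot path are placeholders, and each alias gets its own -d:

    sudo certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com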

This will provision a key and certificate + chain into the letsencrypt home directory (default /etc/letsencrypt). The /etc/letsencrypt/live directory contains symlinks to the current keys and certs.

3 – Update the Apache/Nginx virtual host configs to use the symlinks maintained by the letsencrypt client, e.g.:
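
For Nginx that looks like the following (Apache uses SSLCertificateFile/SSLCertificateKeyFile; "example.com" is a placeholder for the virtual host's primary name):

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;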

4 – Create a script for renewing these certs, something like:
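
A sketch of such a script; certbot renew only touches certs that are close to expiry, so it is safe to run daily (the reload command assumes Nginx):

    #!/bin/bash
    set -euo pipefail
    certbot renew --quiet
    systemctl reload nginx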

5 – Run this script automatically every day with cron or Jenkins, e.g.:
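
An example crontab entry (the script path is a placeholder):

    # Attempt renewal daily at 03:30 and keep a log of the output
    30 3 * * * /usr/local/bin/renew-letsencrypt.sh >> /var/log/renew-letsencrypt.log 2>&1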

6 – Monitor the results of the script and externally monitor the expiry dates of your certificates (something will go wrong one day).

Provisioning and auto-renewing AWS Elastic Load Balancer TLS/SSL certs

This has been made very easy by Alex Gaynor with a handy Python script: https://github.com/alex/letsencrypt-aws. This is a great use case for Docker, and Alex has created a Docker image for the script: https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. To use this with ease I created a layer on top with a new Dockerfile:
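
A sketch of that wrapper image; the LETSENCRYPT_AWS_CONFIG JSON schema is documented in the upstream README and the values below are placeholders only (in practice it is better to inject credentials at run time than to bake them into the image):

    FROM alexgaynor/letsencrypt-aws
    ENV AWS_DEFAULT_REGION=us-east-1
    ENV LETSENCRYPT_AWS_CONFIG='{"domains":[{"elb":{"name":"my-elb"},"hosts":["example.com"]}],"acme_account_key":"s3://my-bucket/acme-account.key"}'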

The explanation of these values can be found at https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. It's quite important to create a specific IAM user limited to the required Route53, S3 and ELB actions. The image needs to be rebuilt on changes:
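
For example (the image name is a placeholder):

    docker build -t myorg/letsencrypt-aws .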

With this image built, another cron or Jenkins job can be run daily, executing something like:
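
A sketch of that job; the update-certificates argument and the credential handling should be checked against the upstream image's README for the version you pull:

    docker run --rm \
      -e AWS_ACCESS_KEY_ID \
      -e AWS_SECRET_ACCESS_KEY \
      myorg/letsencrypt-aws update-certificates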

Again, the job must be monitored along with external monitoring of certificates. See a complete SSL checker at https://github.com/markz0r/tools/tree/master/ssl_check_complete.

Download all Evernote attachments via Evernote API with Python

Python script for downloading snapshots of all attachments in all of your Evernote notebooks.

Downloading Google Drive with Python via Drive API

Python script for downloading snapshots of all files in your Google Drive, including those shared with you.

source: https://github.com/SecurityShift/tools/blob/master/backup_scripts/google_drive_backup.py

Intro to DevOps – Lesson 3

Topics:

  • Continuous integration/Delivery (Jenkins)
    • Automate from commits to repo to build to test to deploy
  • Monitoring (Graphite)

At minimum there will be 6 environments:

  1. local (dev workstations)
  2. dev (sandbox)
  3. integration (test build and side effects)
  4. test (UAT, Performance, QA may be many environments)
  5. staging (live data? – replication of production)
  6. production

From coding to prod

If the handover at each of these steps were manual there would be too many opportunities for delays and errors. So:

  • Continuous integration
    • Maintain a code repository (git)
    • Automate the build (Jenkins, TravisCI, CircleCI)
    • Test the build (Jenkins, TravisCI, CircleCI)
    • Commit changes often (manual)
    • Build each commit (Jenkins)
    • Fix bugs immediately (manual)
    • Test in a clone environment (test suites)

Some practical work setting up Jenkins…
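
A minimal declarative Jenkinsfile sketch of that commit-build-test-deploy flow (not from the course material; the build and deploy commands are placeholders):

    pipeline {
      agent any
      triggers { pollSCM('H/5 * * * *') }            // build each commit (poll roughly every 5 minutes)
      stages {
        stage('Build') { steps { sh 'make build' } }
        stage('Test')  { steps { sh 'make test' } }
        stage('Deploy to test') {
          steps { sh './deploy.sh test' }            // deploy into a clone environment
        }
      }
    }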

 

Intro to DevOps – Lesson 2

Topics:

  • How to get Dev and ITOps working together
  • Looking at some tools to enable that integration

Started off looking at the basic conflict between ITOps and Dev; in essence, the risk aversion of ITOps, why it's there and why it's both good and bad. First-hand experience of being woken up many nights after a ‘great’ new release makes this quite pertinent.

  • ITOps needs to run systems that are tested and tightly controlled. This means that when a release is coming that requires new or significantly changed components, ITOps needs to be included in discussions and made aware, so they can ensure stability in production
  • Dev needs to adopt, trial and use new technologies to create software solutions that meet user and business requirements in an effective manner
  • Performance testing needs to be conducted throughout the development iterations, and this is impossible if the development environments are significantly different from production

[Image: DevOps release fixes]
These improvements would remove most of the real-world issues we experience when conducting release deployments.

Performance tests:

How can performance tests be conducted throughout the development of new releases, particularly if these releases become more regular?

Proposed answer 1 is a ‘golden image’: a set image that is used for developing, testing and operating services, covering apps, libraries and OS. Docker makes this more practical with containers.

Proposed answer 2 is to apply configuration management to all machines (not sure how practical this would be).

Practical lab:

Installed VirtualBox, Vagrant, Git, SSH and Packer.

Vagrant configures VMs; Packer enables building of ‘golden images’.

Packer template components (a minimal template sketch follows the list):

  • Builders take a source image and produce a machine image.
  • Provisioners install and configure software within the running machine (shell, Chef and Puppet scripts).
  • Post-processors conduct tasks on images output by builders, e.g. compress (https://www.packer.io/docs/templates/post-processors.html). Post-processors can produce machine images for AWS, DigitalOcean, Hyper-V, Parallels, QEMU, VirtualBox and VMware.
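
A minimal template sketch (not the course's template; the AMI id, script path and names are placeholders):

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "centos",
        "ami_name": "golden-image-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "script": "scripts/install_app.sh"
      }],
      "post-processors": ["vagrant"]
    }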

Lab instructions were pretty straightforward. On to lesson 3.