
Deploying Microservices

So far the Kubernetes examples have been little more than what could be accomplished with Bash, Docker and Jenkins. Now we shall look at how Kubernetes can be used for more effective management of application deployment and configuration. Enter Desired State.

Deployments are used to define our desired state and work with replica sets to ensure that desired state is met. A deployment is an abstraction over pods.
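
A minimal deployment manifest looks something like the following sketch (the name, labels and image are illustrative, not from the course):

# deployment.yaml – a minimal example
apiVersion: extensions/v1beta1   # apps/v1 on current clusters
kind: Deployment
metadata:
  name: "hello"
spec:
  replicas: 3                    # desired state: three pods
  template:
    metadata:
      labels:
        app: "hello"
    spec:
      containers:
        - name: "hello"
          image: "example/hello:1.0.0"
          ports:
            - containerPort: 80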

Services are used to group pods and provide a stable interface to them.

Scaling is up next. Update the replicas field in the deployment's configuration file, run kubectl apply -f <file>, and that is all that needs to be done! Well, it's not quite that simple. That scales the number of replica pods deployed to our Kubernetes cluster; it does not change the amount of machine (VM/physical) resources in the cluster. So… I would not really call this scaling :(.
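
Assuming the illustrative deployment above, scaling out looks like this:

# after bumping replicas in deployment.yaml (e.g. 3 -> 5):
kubectl apply -f deployment.yaml
# or imperatively, without touching the file:
kubectl scale deployment hello --replicas=5
# confirm the new pod count
kubectl get deployments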

On to updating (patching, new versions etc.). There are two common approaches: rolling updates and blue-green deployments. A rolling update can be conducted by updating the image reference in the deployment config (containers -> image) and then running kubectl apply -f. This will automatically conduct a staged rollout.
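
Again with the illustrative deployment from above, a rolling update looks like this:

# point the deployment at the new image version in the manifest, then:
kubectl apply -f deployment.yaml
# or update the image reference directly:
kubectl set image deployment/hello hello=example/hello:2.0.0
# watch the staged rollout progress
kubectl rollout status deployment/hello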

OK so that’s the end of the course – it was not very deep, and did not cover anything like dealing with persistence layers. Nonetheless it was good to review the basics. The next step is to understand the architecture of my application running on Kubernetes in AWS.

At first I read a number of threads stating that Kubernetes does not support cross-availability-zone clusters in AWS. They are in fact supported: kube-aws supports “spreading” a cluster across any number of Availability Zones in a given region (https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-render.html). With that in mind, the following architecture is what I will be moving to:

Kubernetes high level architecture
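
In kube-aws terms, spreading is configured in cluster.yaml by listing a subnet per availability zone; a sketch based on the linked docs (zones and CIDRs illustrative):

# cluster.yaml (excerpt)
subnets:
  - availabilityZone: ap-southeast-2a
    instanceCIDR: "10.0.0.0/24"
  - availabilityZone: ap-southeast-2b
    instanceCIDR: "10.0.1.0/24"
  - availabilityZone: ap-southeast-2c
    instanceCIDR: "10.0.2.0/24"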

Instead of HAProxy I will stick with our existing NGINX reverse proxy.


Introduction To Microservices and Containers with Docker

After running through some unguided examples of Kubernetes I still don’t feel confident that I am fully grasping the correct ways to leverage the tool. Fortunately there is a course on Udacity that seems to be right on topic: Scalable Microservices with Kubernetes.

The first section, Introduction to Microservices references a number of resources including The Twelve-Factor App which is a nice little manifesto.

The tools used in the course are:

  • Golang – a newish programming language from the creators of C (at Google)
  • Google Cloud Shell – a temporary VM preloaded with the tools needed to manage our clusters
  • Docker – to package, distribute, and run our application
  • Kubernetes – to handle management, deployment and scaling of our application
  • Google Container Engine – GKE, a hosted Kubernetes service

The Introduction to Microservices lesson goes on to discuss the benefits of microservices and why they are being used (it boils down to faster development). The increased automation requirements that come with microservices are also highlighted.

We then go on to set up GCE (Google Compute Engine), creating a new project and enabling the Compute Engine and Container Engine APIs. To manage the Google Cloud Platform project we used the Google Cloud Shell, where we did some basic testing and installation of Golang. I am not sure what the point of that was, as the Cloud Shell is just a management tool(?).
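
For reference, that setup can also be scripted; a sketch using today's gcloud (the project id is illustrative):

# create a project, point gcloud at it, and enable the required APIs
gcloud projects create my-k8s-course
gcloud config set project my-k8s-course
gcloud services enable compute.googleapis.com container.googleapis.com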

Next step was a review of

All pretty straight forward — On to Building Containers with Docker.

Now we want to build, package, distribute and run our code. Creating containers is easy with Docker, and doing so gives us much more confidence about the dependencies and run environment of our microservices.

Part 1 – Spin up a VM:

# set session zone
gcloud config set compute/zone asia-east1-c
# start instance
gcloud compute instances create ubuntu \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160420c
# login
gcloud compute ssh ubuntu 
# note that starting an instance like this makes it open to the world on all ports

After demonstrating how difficult it is to run multiple instances/versions of a service on a single OS, the arguments for containers and the isolation they enable were brought forth: process (kind of), package, network, namespace etc. A basic Docker demo was then conducted, followed by creating a couple of Dockerfiles, building some images and starting some containers. The images were then pushed to a registry, with some discussion on public and private registries.
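
The Dockerfile-to-registry flow looked roughly like this (image and repository names are illustrative, and docker push assumes you are logged in to a registry):

# Dockerfile – minimal example for a prebuilt static binary
FROM alpine:3.4
COPY hello /usr/bin/hello
EXPOSE 80
ENTRYPOINT ["hello"]

# build, run locally, then push to a registry
docker build -t udacity/hello:1.0.0 .
docker run -d --name hello -p 80:80 udacity/hello:1.0.0
docker push udacity/hello:1.0.0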


Intro to Kubernetes

OK – now we are getting to the interesting stuff. Given we have a microservices architecture using Docker, how do we effectively operate our services? That must cover production environments, testing, monitoring, scaling etc.

Problems/challenges with microservices – organisational structure, automation requirements, discovery requirements.

We have seen how to package up a single service but that is a small part of the operating microservices problem. Kubernetes is suggested as a solution for:

  • App configuration
  • Service Discovery
  • Managing updates/Deployments
  • Monitoring

Create a cluster (i.e. a CoreOS cluster) and treat it as a single machine.

On to a practical example.

# Initiate a kubernetes cluster on GCE
gcloud container clusters create k0
# Launch a single instance
kubectl run nginx --image=nginx:1.10.0
# List pods
kubectl get pods
# Expose nginx to the world via a load balancer provisioned by GCE
kubectl expose deployment nginx --port 80 --type LoadBalancer
# List services
kubectl get services
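
Once GCE has provisioned the load balancer, the service gets an external IP and nginx is reachable from anywhere (placeholder IP below):

# wait until EXTERNAL-IP is populated, then hit it
kubectl get services nginx
curl http://<EXTERNAL-IP>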

Kubernetes cheat sheet

Next was a discussion of the Kubernetes components:

  • Pods (containers, volumes, namespace, single IP)
  • Monitoring, readiness/health checks (see the sketch below)
  • Configmaps and Secrets
  • Services
  • Labels
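
A pod definition with readiness and liveness probes might look like this sketch (name, image, paths and ports are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: "healthy-monolith"
  labels:
    app: "monolith"
spec:
  containers:
    - name: "monolith"
      image: "example/monolith:1.0.0"
      ports:
        - containerPort: 80
      readinessProbe:            # is this pod ready to receive traffic?
        httpGet:
          path: "/readiness"
          port: 81
        initialDelaySeconds: 5
        timeoutSeconds: 1
      livenessProbe:             # should this pod be restarted?
        httpGet:
          path: "/healthz"
          port: 81
        initialDelaySeconds: 5
        timeoutSeconds: 1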

Creating secrets:

# create secrets for all files in dir
kubectl create secret generic tls-certs --from-file=tls/
# describe secrets you have just created
kubectl describe secrets tls-certs
# create a configmap
kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf
# describe the configmap just created
kubectl describe configmap nginx-proxy-conf

Now that we have our tls-certs and nginx-proxy-conf defined in the Kubernetes cluster, they must be exposed to the correct pods. This is accomplished within the pod yaml definition:

volumes:
    - name: "tls-certs"
      secret:
        secretName: "tls-certs"
    - name: "nginx-proxy-conf"
      configMap:
        name: "nginx-proxy-conf"
        items:
          - key: "proxy.conf"
            path: "proxy.conf"
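
Defining the volumes is only half of it; they also need to be mounted into the container in the same pod definition. A sketch (mount paths illustrative):

containers:
  - name: "nginx"
    image: "nginx:1.10.0"
    volumeMounts:
      - name: "tls-certs"
        mountPath: "/etc/tls"
      - name: "nginx-proxy-conf"
        mountPath: "/etc/nginx/conf.d"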

In production you will want to expose pods using services. Services are a persistent endpoint for pods. If a pod has the right labels it will automatically be added to the correct service pool once confirmed alive. There are currently 3 service types:

    • ClusterIP – internal only
    • NodePort – the service is exposed on a port on each node's externally accessible IP
    • LoadBalancer – a load balancer from the cloud service provider (GCE and AWS(?) only)

Accessing a service using NodePort:

# create a service from the definition below
kubectl create -f ./services/monolith.yaml

# contents of ./services/monolith.yaml:
kind: Service
apiVersion: v1
metadata:
  name: "monolith"
spec:
  selector:
    app: "monolith"
    secure: "enabled"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      nodePort: 31000
  type: NodePort
# open the nodePort port to the world on all cluster nodes
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000
# list external ip of compute nodes
gcloud compute instances list
NAME                               ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-k0-default-pool-0bcbb955-32j6  asia-east1-c  n1-standard-1               10.140.0.4   104.199.198.133  RUNNING
gke-k0-default-pool-0bcbb955-7ebn  asia-east1-c  n1-standard-1               10.140.0.3   104.199.150.12   RUNNING
gke-k0-default-pool-0bcbb955-h7ss  asia-east1-c  n1-standard-1               10.140.0.2   104.155.208.48   RUNNING

Now any request to those EXTERNAL_IPs on port 31000 will be routed to pods that have the labels “app=monolith,secure=enabled” (as defined in the service yaml).

# get pods meeting service label definition
kubectl get pods -l "app=monolith,secure=enabled"
kubectl describe pods secure-monolith
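
To verify end to end, hit one of the node EXTERNAL_IPs on the NodePort (-k is needed because the monolith's certs are self-signed):

# any node's EXTERNAL_IP from the instance list above will do
curl -k https://104.199.198.133:31000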

Okay – so that, like the unguided demo I worked through previously, was very light on detail. I am still not clear on how I would manage a microservices application using Kubernetes. How do I do deployments, how do I monitor and alert, how do I load balance (if not in Google Cloud), how do I do service discovery/enrollment? There’s one more lesson to go in the course, so hopefully “Deploying Microservices” is more illuminating.