
Introduction To Microservices and Containers with Docker

After running through some unguided examples of Kubernetes, I still don't feel confident that I am fully grasping the correct ways to leverage the tool. Fortunately there is a course on Udacity that seems to be right on topic: Scalable Microservices with Kubernetes.

The first section, Introduction to Microservices, references a number of resources, including The Twelve-Factor App, which is a nice little manifesto.

The tools used in the course are:

  • Golang – A newish programming language from the creators of C (at Google)
  • Google Cloud Shell – A temporary VM preloaded with the tools needed to manage our clusters
  • Docker – to package, distribute, and run our application
  • Kubernetes – to handle management, deployment, and scaling of our applications
  • Google Container Engine – GKE is a hosted Kubernetes service

The Introduction to Microservices lesson goes on to discuss the benefits of microservices and why they are being used (it boils down to faster development). It also highlights the increased need for automation that comes with microservices.

We then go on to set up GCE (Google Compute Engine), creating a new project and enabling the Compute Engine and Container Engine APIs. To manage the Google Cloud Platform project we used the Google Cloud Shell, where we did some basic testing and an installation of Go. I am not sure what the point of that was, as the Cloud Shell is just a management tool(?).
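For anyone following along, the Cloud Shell side of that setup boils down to a few commands; something like this (PROJECT_ID is a placeholder, not the project used in the course):

# confirm which project the Cloud Shell session is pointed at
gcloud config list project
# or point it at the new project explicitly (PROJECT_ID is a placeholder)
gcloud config set project PROJECT_ID
# verify the Go toolchain is available after the install
go version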

Next step was a review of

All pretty straightforward. On to Building Containers with Docker.

Now we want to Build, Package, Distribute, and Run our code. Docker makes creating containers easy, which lets us be more confident about the dependencies and runtime environment of our microservices.

Part 1 – Spin up a VM:

# set session zone
gcloud config set compute/zone asia-east1-c
# start instance
gcloud compute instances create ubuntu \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160420c
# login
gcloud compute ssh ubuntu 
# note that starting an instance like this makes it open to the world on all ports
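
To see what the instance is actually exposed to, the project's firewall rules can be listed from the same shell; a quick check, assuming the instance landed on the default network:

# list the project's firewall rules; instances on the default network inherit
# the default-allow-* rules unless more restrictive ones are added
gcloud compute firewall-rules list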

After demonstrating how difficult it is to run multiple instances or versions of a service directly on an OS, the arguments for containers and the isolation they enable were brought forth: process (to a degree), package, and network isolation via namespaces, etc. A basic Docker demo was then conducted, followed by creating a couple of Dockerfiles, building some images, and starting some containers. The images were then pushed to a registry, with some discussion of public and private registries.
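I have not reproduced the course's exact Dockerfiles here, but the build/run/push loop looks roughly like this (the image name hello-node, the tag, and REGISTRY_USER are placeholders):

# build an image from the Dockerfile in the current directory
docker build -t hello-node:v1 .
# run a container from that image in the background and confirm it is up
docker run -d --name hello-node hello-node:v1
docker ps
# tag the image for a registry and push it (REGISTRY_USER is a placeholder)
docker tag hello-node:v1 REGISTRY_USER/hello-node:v1
docker push REGISTRY_USER/hello-node:v1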
