
Testing Kubernetes and CoreOS

In the previous post I described some of the general direction and ‘wants’ for the next step of our IT Ops, summarised as:

Want | Description
---- | -----------
Continuous Deployment | We need more automation and resiliency in our deployments, without adding our own code that needs to be changed when architecture and service dependencies change.
Automation of deployments | Deployments, rollbacks, service discovery, easy local deployments for devs.
Less time on updates | Automation of updates.
Reduced dependence on config management (Puppet) | Reduce the number of Puppet policies applied to hosts.
Image Management | Image management (with images immutable post deployment).
Reduce baseline work for IT staff | IT staff have low baseline work, more room for initiatives.
Reduce hardware footprint | There can be no increase in hardware resource requirements (cost).

Start with the basics

Let's start with the simple demo deployment supplied by the CoreOS team.

https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
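
For reference, the guide essentially boils down to the following (the repo URL and file names here are from memory of the guide, so defer to the linked page if anything differs):

# clone the CoreOS Kubernetes repo and move into the multi-node Vagrant setup
git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/multi-node/vagrant

# copy and adjust the cluster sizing (1 etcd, 1 controller, 2 workers here)
cp config.rb.sample config.rb

# bring the VMs up
vagrant up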

That setup was pretty straightforward (as supplied demos usually are). Simple verification that the k8s components are up and running:

vagrant global-status 
#expected output assuming 1 etcd, 1 k8s controller and 2 k8s workers as defined in config.rb
id name provider state directory
----------------------------------------------------------------------------------------------------------
2146bec e1 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
87d498b c1 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
46bac62 w1 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant
f05e369 w2 virtualbox running VirtualBox VMs/coreos-kubernetes/multi-node/vagrant

#set kubectl config and context
export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi
kubectl get nodes
#expected output
NAME STATUS AGE
172.17.4.101 Ready,SchedulingDisabled 4m
172.17.4.201 Ready 4m
172.17.4.202 Ready 4m

kubectl cluster-info
#expected output
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

*Note: It can take some time (5 minutes or longer if CoreOS is updating) for the Kubernetes cluster to become available. To check status, vagrant ssh c1 (or w1/w2/e1) and run journalctl -f to follow the service logs.
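
For example (the kubelet unit name is an assumption based on the stock CoreOS setup; your nodes may name the units differently):

# ssh onto the controller (or a worker) and follow all service logs
vagrant ssh c1
journalctl -f

# or narrow it down to a single unit, e.g. the kubelet
journalctl -f -u kubelet.service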

Accessing the Kubernetes dashboard requires tunnelling, which with the Vagrant setup can be accomplished with this gist: https://gist.github.com/iamsortiz/9b802caf7d37f678e1be18a232c3cc08 (note that the gist is for a single node; if using multi-node, change line 21 to the following):

vagrant ssh c1 -c "if [ ! -d /home/$USERNAME ]; then sudo useradd $USERNAME -m -s /bin/bash && echo '$USERNAME:$PASSWORD' | sudo chpasswd; fi"

Now the dashboard can be accessed at http://localhost:9090/.
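
Alternatively (an assumption on my part rather than something from the CoreOS guide), kubectl proxy can expose the dashboard without setting up a tunnel user, since kubeconfig is already pointing at the cluster:

# proxy the API server to localhost (port is arbitrary)
kubectl proxy --port=8001

# the dashboard is then reachable via the API proxy path, e.g.
# http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/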

Now let's try some simple k8s examples:

Create a load-balanced nginx deployment:

# create 2 containers from nginx image (docker hub)
kubectl run my-nginx --image=nginx --replicas=2 --port=80
# expose the service to the internet
kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
# list the pods backing the deployment
kubectl get po
# show service info
kubectl get service my-nginx
kubectl describe service/my-nginx

First interesting point… with the simple deployment above, I have already gone awry. Though I have 2 nginx containers (presumably for redundancy and load balancing), they have both been deployed on the same worker node (host). Let's not get bogged down now; I'll keep working through the examples, which probably cover how to ensure redundancy across hosts.
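
(For reference, and this is not something I ran against this cluster: newer Kubernetes versions can express "spread these replicas across hosts" with podAntiAffinity in the pod template. On the 1.3 cluster used here that feature was still alpha, so treat the snippet below as a sketch assuming a newer cluster.)

# sketch: force the two nginx replicas onto different nodes
# (requires a Kubernetes version with pod anti-affinity; not verified on this 1.3 cluster)
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF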


# delete the deployment and service (removes the pods and containers)
kubectl delete deployment,service my-nginx

Reviewed config file (pod) options: http://kubernetes.io/docs/user-guide/configuring-containers/
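
As a quick exercise against that page, a minimal pod spec can be piped straight into kubectl; the pod name and resource limits below are placeholder values I chose for illustration:

# minimal pod definition created inline (example values only)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
EOF

# inspect and clean up
kubectl get pod hello-nginx -o wide
kubectl delete pod hello-nginx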

Deploy a demo application

https://github.com/kubernetes/kubernetes/blob/release-1.3/examples/guestbook/README.md

  1. create a service for the redis master, redis slaves and frontend
  2. create a deployment for the redis master, redis slaves and frontend (roughly as sketched below)
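
Concretely, that comes down to something like the following (the YAML file names are from memory of the release-1.3 guestbook example, so check the README if they differ):

cd examples/guestbook

# services first, then the deployments
kubectl create -f redis-master-service.yaml
kubectl create -f redis-slave-service.yaml
kubectl create -f frontend-service.yaml

kubectl create -f redis-master-deployment.yaml
kubectl create -f redis-slave-deployment.yaml
kubectl create -f frontend-deployment.yaml

# watch everything come up
kubectl get pods,services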

Pretty easy… now how do we get external traffic to the service? Either NodePorts, LoadBalancers, or an Ingress resource (?).
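
With no cloud provider behind this Vagrant cluster, a LoadBalancer service won't get an external IP, so NodePort looks like the path of least resistance. A rough sketch (the frontend service name comes from the guestbook example; the allocated port will vary):

# switch the guestbook frontend service over to NodePort
kubectl patch service frontend -p '{"spec": {"type": "NodePort"}}'

# find the port that was allocated
kubectl describe service frontend | grep NodePort

# then browse to http://<worker-ip>:<allocated-port>/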

Next, let's look at how to extend Kubernetes to…
