OK – now we are getting to the interesting stuff. Given we have a microservices architecture using Docker, how do we effectively operate our services? Operating them must cover production environments, testing, monitoring, scaling, etc.
Problems/Challenges with microservices – organisational structure, automation requirements, discovery requirements.
We have seen how to package up a single service, but that is only a small part of the problem of operating microservices. Kubernetes is suggested as a solution for:
- App configuration
- Service Discovery
- Managing updates/Deployments
Create a cluster (i.e. a CoreOS cluster) and treat it as a single machine.
On to a practical example.
```shell
# Initiate kubernetes cluster on GCE
gcloud container clusters create k0

# Launch a single instance
kubectl run nginx --image=nginx:1.10.0

# List pods
kubectl get pods

# Expose nginx to the world via a load balancer provisioned by GCE
kubectl expose deployment nginx --port 80 --type LoadBalancer

# List services
kubectl get services
```
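For reference, `kubectl run` creates a Deployment behind the scenes. A sketch of the equivalent declarative manifest might look like the following (the `apps/v1` API version and label names are my assumptions, not from the course):

```yaml
# Hypothetical declarative equivalent of `kubectl run nginx --image=nginx:1.10.0`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10.0
          ports:
            - containerPort: 80
```

You would apply it with `kubectl create -f nginx-deployment.yaml`, which gives you a versioned file to check into source control rather than an ad-hoc command.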
Next was a discussion of the Kubernetes components:
- Pods (containers, volumes, a shared namespace, a single IP)
- Monitoring, readiness/health checks
- ConfigMaps and Secrets
```shell
# create secrets for all files in dir
kubectl create secret generic tls-certs --from-file=tls/

# describe secrets you have just created
kubectl describe secrets tls-certs

# create a configmap
kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf

# describe the configmap just created
kubectl describe configmap nginx-proxy-conf
```
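The readiness/health checks mentioned above are declared per container in the pod spec. A minimal sketch, where the image name, paths, and port are illustrative assumptions rather than values from the course materials:

```yaml
# Hypothetical probe configuration inside a pod spec
containers:
  - name: monolith
    image: example/monolith:1.0.0   # image name assumed
    livenessProbe:                  # restart the container if this fails
      httpGet:
        path: /healthz
        port: 81
      initialDelaySeconds: 5
      timeoutSeconds: 1
    readinessProbe:                 # remove pod from service pools if this fails
      httpGet:
        path: /readiness
        port: 81
      initialDelaySeconds: 5
      timeoutSeconds: 1
```

The distinction matters: a failing liveness probe restarts the container, while a failing readiness probe just stops traffic being routed to it.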
Now that we have our tls-certs secret and nginx-proxy-conf ConfigMap defined in the Kubernetes cluster, they must be exposed to the correct pods. This is accomplished within the pod's YAML definition:
```yaml
volumes:
  - name: "tls-certs"
    secret:
      secretName: "tls-certs"
  - name: "nginx-proxy-conf"
    configMap:
      name: "nginx-proxy-conf"
      items:
        - key: "proxy.conf"
          path: "proxy.conf"
```
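Declaring the volumes alone is not enough; each container that needs them must also mount them. A sketch of the matching `volumeMounts` section (the mount paths are assumptions for illustration):

```yaml
containers:
  - name: nginx
    image: nginx:1.10.0
    volumeMounts:
      # mount paths below are illustrative assumptions
      - name: "tls-certs"
        mountPath: "/etc/tls"
      - name: "nginx-proxy-conf"
        mountPath: "/etc/nginx/conf.d"
```

The `name` fields must match the volume names declared above; the files inside the secret and ConfigMap then appear under the given mount paths.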
In production you will want to expose pods using services. Services are a persistent endpoint for pods. If pods carry the labels a service selects on, they are automatically added to that service's pool once their health checks pass. There are currently 3 service types:
- ClusterIP – internal only
- NodePort – the service is exposed on a static port on each node's IP
- LoadBalancer – a load balancer provisioned by the cloud provider (GCE and AWS(?) only)
Accessing a service using NodePort:
```shell
# create a service
kubectl create -f ./services/monolith.yaml
```

The service definition (`./services/monolith.yaml`):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: "monolith"
spec:
  selector:
    app: "monolith"
    secure: "enabled"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      nodePort: 31000
  type: NodePort
```

```shell
# open the nodePort port to the world on all cluster nodes
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000

# list external IPs of compute nodes
gcloud compute instances list
```

```
NAME                               ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-k0-default-pool-0bcbb955-32j6  asia-east1-c  n1-standard-1               10.140.0.4   184.108.40.206  RUNNING
gke-k0-default-pool-0bcbb955-7ebn  asia-east1-c  n1-standard-1               10.140.0.3   220.127.116.11  RUNNING
gke-k0-default-pool-0bcbb955-h7ss  asia-east1-c  n1-standard-1               10.140.0.2   18.104.22.168   RUNNING
```
Now any request to those EXTERNAL_IPs on port 31000 will be routed to pods with the labels "app=monolith,secure=enabled" (as defined in the service selector).
```shell
# get pods matching the service label selector
kubectl get pods -l "app=monolith,secure=enabled"
kubectl describe pods secure-monolith
```
Okay – so that, like the unguided demo I worked through previously, was very light on detail. I am still not clear on how I would manage a microservices application using Kubernetes. How do I do deployments, how do I monitor and alert, how do I load balance (if not in Google Cloud), how do I do service discovery/enrolment? There's one more lesson to go in the course, so hopefully "Deploying Microservices" is more illuminating.