March 8, 2016
Alan Kay: Simple things should be simple, complex things should be possible
Since Google open-sourced the Kubernetes (k8s) project, it has never been easier to run your own application in a cloud, public or private. And because it is easy and because it is very useful, we will show you how to do it.
You have the choice of running it on nearly every public cloud, or with minimal changes, in your own private data center.
The first thing you need to address is the architecture of your system: separate the network layer, the business logic layer, and the persistence layer.
Define services for:
- firewall and load balancer
- frontend to web and mobile clients
- business application
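As a sketch, the frontend tier from the list above could be exposed through a Kubernetes service definition like the following (the names, labels, and ports are hypothetical, chosen only for illustration):

```yaml
# Hypothetical service for the frontend tier; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    tier: frontend
spec:
  selector:
    tier: frontend      # routes traffic to pods labeled tier=frontend
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # port the frontend containers listen on
  type: LoadBalancer    # in a public cloud, this provisions the cloud's load balancer
```

The LoadBalancer type is exactly where the cloud provider takes the firewall and load balancer work off your hands.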
In a public cloud, the load balancer and standard persistence layers are provided by the cloud, so you can make your life easier by simply using existing solutions.
Ideally, you get rid of time-consuming tasks like database and firewall administration. Using the built-in caching solutions (Redis, Memcache, …), you can now focus entirely on your frontend and business application, creating solutions that are not just pretty to look at, but actually work well. Really well.
Your next step is your application layout.
You should glue together, in a pod, applications which need to run on the same host because they share resources or for performance reasons. (A pod is simply a group of containers sharing the same fate during their life cycle: either all its containers are running, or none are.)
Defining your pod is easy. You just need a few JSON or YAML files describing the pod, while a replication controller and a service can serve as the interface to your microservice.
In the pod, you can run all processes needed for your service. Even with a single container you can start an entire application cluster.
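A minimal pod manifest might look like this. It is a sketch only: the application image, names, and ports are assumptions made for illustration, not taken from a real deployment:

```yaml
# Hypothetical two-container pod: the business app and a caching sidecar
# share the same network namespace and the same life cycle.
apiVersion: v1
kind: Pod
metadata:
  name: business-app
  labels:
    run: business-app
spec:
  containers:
    - name: app
      image: example/business-app:1.0  # placeholder image name
      ports:
        - containerPort: 8080
    - name: cache
      image: redis                     # reachable from the app via localhost:6379
      ports:
        - containerPort: 6379
```

Because both containers live in one pod, the app talks to its cache over localhost, and Kubernetes schedules, starts, and stops them as a unit.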
However, to define a Kubernetes architecture properly, you should separate persistent and stateless services.
To do so, you will need to find the central entry point. The central entry point is the API server, which governs your worker nodes (also called minions).
The API server is the central access point for managing a cluster. Itself stateless, it stores its data in an etcd cluster. Everything is designed for failover, avoiding single points of failure. If the API server fails, replace it: you can easily switch to a different API server. If an etcd cluster member fails, spawn a new one and let it join the cluster.
Starting a web server is as easy as

kubectl run nginx --image=nginx

This implicitly creates a replication controller, which in turn creates the actual pod running nginx. You should check the state of the controller and the pods.
kubectl get rc,pods
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS   AGE
nginx        nginx          nginx      run=nginx   1          5m
NAME          READY   STATUS    RESTARTS   AGE
nginx-a8su8   1/1     Running   0          5m
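The replication controller that kubectl run generates behind the scenes is roughly equivalent to the following manifest (a sketch; the actual generated object carries additional defaults):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    run: nginx        # the controller manages all pods carrying this label
  template:           # pod template used to stamp out replicas
    metadata:
      labels:
        run: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```

Saving this as a file and running kubectl create -f on it would give you the same controller, with the advantage that the definition is versionable.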
Scaling up is a single command. With

kubectl scale rc nginx --replicas=3

we tell the replication controller to create three replicas.
kubectl get rc,pods
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS   AGE
nginx        nginx          nginx      run=nginx   3          11m
NAME          READY   STATUS    RESTARTS   AGE
nginx-a8su8   1/1     Running   0          11m
nginx-rlkfs   1/1     Running   0          4m
nginx-sr4k5   1/1     Running   0          4m
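To actually reach the three replicas under one stable address, you put a service in front of them. A minimal sketch (the port numbers are assumptions):

```yaml
# Cluster-internal service load-balancing across the nginx replicas.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx   # matches the label set by kubectl run
  ports:
    - port: 80
      targetPort: 80
```

This is essentially what kubectl expose rc nginx --port=80 would generate for you.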
Behind the scenes, Kubernetes delegates the task of creating a web server to the nodes attached to the API server.
kubectl get nodes
NAME           LABELS                                STATUS   AGE
192.168.10.2   kubernetes.io/hostname=192.168.10.2   Ready    3m
192.168.10.3   kubernetes.io/hostname=192.168.10.3   Ready    3m
192.168.10.4   kubernetes.io/hostname=192.168.10.4   Ready    3m
A kubelet agent runs on each node, starting and stopping containers. In most cases, Docker is the technology used behind the scenes to pull images and control containers.
The kubectl command has a plethora of options. Once you have access to an API server, standard use cases like starting and scaling servers are a breeze, and more complex tasks are possible too. However, setting up a Kubernetes cluster itself is a challenging task: the network layer, storage, and firewall rules must be configured dynamically and depend on various complex external components you will need to integrate.
Beautiful and useful? Absolutely. What's next: in our upcoming blog posts we will look behind the scenes and draft the steps needed to install Kubernetes in a private cloud.