August 11, 2022

The Rise of Kubernetes Part 3: Containers

Michael Sklyar
VP of R&D & Co-Founder

TLDR: To start, this is not going to be one of those traditional “Containers vs VMs” conversations. This series is more of a first-hand account of the evolution of application infrastructure from the operations team’s point of view, from physical servers all the way to the rise of Kubernetes.

The rise in popularity of Kubernetes is a tale of overcoming the operational complexities of scaling application infrastructure to support the growing demand for applications and services.

I like to think of it as a story of abstraction, in which we have added flexibility and scalability by subtracting dependencies that slowed operations. We still have not removed all the complexities. Hell, you could easily argue things got more complex during this evolution, but this progression has driven results that have changed the way technology impacts the world we live in today.

Let’s dive deeper into what this means by taking you through my accounts of moving from manually configuring servers to managing at-scale DevOps operations.

Part 1: Physical Servers

Part 2: Virtual Machines

Part 3: Containers

Part 3: Containers

The next revolution came when it became possible to run processes in dedicated containers. These containers required only a thin OS layer to access the host kernel, rather than a full guest OS. Docker lets you package an application and its dependencies in a virtual container that can run on any Linux, Windows, or macOS computer.
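
For illustration, here is a minimal Dockerfile sketch; the base image, file names, port, and entrypoint are all hypothetical:

    # A hypothetical Dockerfile: the app, its dependencies, and the runtime
    # are baked into one portable image.
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt   # dependencies ship inside the image
    COPY . .
    EXPOSE 8080                                          # hypothetical service port
    CMD ["python", "app.py"]                             # hypothetical entrypoint

Build it once with docker build, and the resulting image runs the same way on any machine with a container runtime.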

Whereas VMs abstract the OS and its processes from the physical server, containers abstract the processes from both the OS and the physical server.

However, it is important to remember the golden rule from Docker Docs: Separate areas of concern by using one service per container. In other words - only one process per container.

The benefits of containers are huge. For starters, no more host dependencies, which lets the application run in a variety of locations: on-premises, in a public cloud, or in a private cloud. Also, a container image is immutable and well defined, and the container itself is typically stateless, which finally fulfills the “runs on my laptop => runs anywhere” promise. (No more excuses, folks!)

Containers usually start much faster than VMs (depending on your app), they are lightweight, and they make application infrastructure more flexible and agile, which better supports CI/CD and DevOps practices. Additionally, upgrades are a breeze, and we can always revert to the previous version if needed.

Tools like Docker Compose let us orchestrate multiple containers. Compose supports health checks, can restart services, run multiple instances, and set up a shared network between the services, and it provides a rich set of tools to manage the containers' lifecycle. (All the things I ever dreamed of!)
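
As a hedged sketch (the service names and images are hypothetical, and the healthcheck assumes curl exists in the web image), a compose file for a web service backed by a Redis cache might look like this:

    # docker-compose.yml - a hypothetical two-service deployment.
    # Compose puts both services on a shared network, so "web" can reach
    # "cache" by its service name.
    services:
      web:
        image: example/web:1.0     # hypothetical application image
        ports:
          - "8080"                 # container port; host port assigned dynamically
        depends_on:
          - cache
        restart: unless-stopped    # bring the service back if it dies
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
          interval: 30s
      cache:
        image: redis:7
        restart: unless-stopped

Running multiple instances is then one flag away: docker compose up --scale web=3.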

As the sketch above shows, the deployment is defined in a simple YAML format that’s pretty damn powerful. Unfortunately, all this magic happens on a single server. What can we do with a single server? A lot of things, but production isn’t one of them. A standard, modern, highly available (multi-zone), horizontally scalable application architecture by definition requires at least two servers.

To solve this limitation, we use container/cluster orchestration frameworks to run containers (lightweight processes isolated via cgroups and namespaces) on a cluster of servers. Of the three main orchestration frameworks (Docker Swarm, Mesos, and Kubernetes), K8s reigned supreme, and today it is the standard container orchestration framework for virtually every industry. From education to farming to F-16s - you name it, Kubernetes is there.

Kubernetes

Kubernetes makes it easy to manage a cluster of nodes and bunches of processes (containers).

Hurray, we solved all the problems: we decoupled everything, and there are well-defined service discovery, monitoring, logging, load balancing, and role-based access capabilities. Failed services are restarted. The Horizontal Pod Autoscaler will scale our service instances up and down, and the Cluster Autoscaler can even scale the cluster itself by allocating or releasing node instances.
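
For instance, here is a minimal Horizontal Pod Autoscaler sketch (assuming a Deployment named web exists; the thresholds are illustrative, not recommendations):

    # Scale the hypothetical "web" Deployment between 3 and 15 replicas,
    # targeting 70% average CPU utilization.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 3
      maxReplicas: 15
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70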

I will ask it to “schedule my workload, I want 15 instances”, and K8s will take it from there, hosting them on different nodes. I really don’t care which VM my pod (service instance) is running on, but I do care about the logical grouping of my services in namespaces and their scale. Everything seems so simple!
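
That request translates roughly into a Deployment like this hedged sketch (the name, namespace, labels, and image are hypothetical):

    # "Schedule my workload, I want 15 instances."
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      namespace: shop            # logical grouping via namespaces
    spec:
      replicas: 15               # K8s keeps 15 pods running, spread across nodes
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example/web:1.0   # hypothetical image

kubectl apply -f deployment.yaml, and the scheduler decides which nodes the 15 pods land on.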

Well, it is not entirely as it seems. Application infrastructures built with Kubernetes have many benefits over physical servers and VMs, but they come with their fair share of complexity. Kubernetes is a colossal beast, and you need to understand many (if not all) of its concepts to sustain a stable and resilient environment. “Day two Kubernetes” is when many DevOps teams discover a wide range of issues. For example:

  1. Pods are crashing (OOMs)
  2. Pods are evicted or throttled
  3. Services are underperforming
  4. Services are throttled even though they don’t seem to be utilizing the requested resources (resource requests and limits, sketched after this list, are usually the first place to look)
  5. Scale-ups and failovers take forever
  6. The whole thing is still expensive and resources are wasted (it is rare to find a cluster in the wild that is over 60% utilized!)
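
Many of these symptoms trace back to the resource requests and limits set on each container. Here is a hedged sketch of a pod-template fragment (the values are illustrative, not recommendations):

    containers:
      - name: web                # hypothetical container
        image: example/web:1.0
        resources:
          requests:              # what the scheduler reserves on a node
            cpu: 250m
            memory: 256Mi
          limits:                # enforcement: CPU over the limit is throttled,
            cpu: 500m            # memory over the limit gets the container
            memory: 512Mi        # OOM-killed

Set requests too low and pods land on oversubscribed nodes or get evicted; set limits too low and you get the OOMs and throttling above; set everything generously “to be safe” and you get that under-60%-utilized cluster.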

The future is bright, but not perfect yet. PerfectScale aims to help you solve your “day two Kubernetes” (a.k.a. real-life) challenges. Follow us as we take you on a journey through Kubernetes scheduling mechanics, best practices, and much more.
