Why Is Kubernetes Winning?

Kubernetes
Leonid Mirsky, CEO
July 3, 2023

I started working with Puppet while I was still working full time in a regular job.

Puppet was far from ideal, but nonetheless a very powerful tool — it was my first introduction to the ‘infrastructure as code’ idea.

Over the years I became much more proficient in its manifest syntax and could do many exciting things, at least from my perspective.

I was happy with Puppet's syntax and what I could do with it. In retrospect, however, the versatility of its syntax wasn't the only characteristic that made Puppet popular. The ecosystem around it, with many small open-source modules you could download and re-use, was a major part of its success.

Fast forward a few years, and I feel the same about Kubernetes and its ecosystem of open-source tools (let's call them addons from now on).

So You Have A Kubernetes Cluster, Now What?

At Opsfleet, we help companies migrate to Kubernetes. Every Kubernetes migration has more or less the same stages — after all the Docker images are ready, we usually spin up a new Kubernetes cluster and start working on the deployment templates.

Once we had a few of these migrations under our belt, I realized that “vanilla” Kubernetes doesn't meet the expectations of today's companies.

Here are some of our customers' questions that I couldn't answer with a simple “yes”:

“Does Kubernetes support autoscaling? Can I scale my workers based on my overall message processing rate?”

“How about canary releases? Is that supported in Kubernetes?”

“We need to enable end-to-end encryption for compliance reasons. Can Kubernetes help?”

“We want to start a new environment for each development branch. Can Kubernetes assign public DNS records based on dynamically created ingress resources?”

The answer to all these questions is that it's possible to implement them in Kubernetes, but not out of the box.

Kubernetes provides pretty solid building blocks, that's true. But teams want and need more than that to cover the needs of their development and production environments.

Addons Ecosystem

With other Docker scheduling platforms, like ECS for example, you have no choice but to roll up your sleeves and code your way out. I’ve done that a few times. Working with AWS’ APIs is always “a pleasure”.

Kubernetes, on the other hand, has a much larger ecosystem of open source addons ready to be installed. The hard part is to find the right tool for the job.

Here are some examples of popular addons, some of which we use very frequently. Just take a look at how much functionality can be unlocked with community tools that aren't part of Kubernetes' core:

1. External-DNS + Cert-Manager

At around the same time, at least from what I remember, Heroku, GitLab, and Runnable all started to offer a way to create a preview environment for each development branch.

The basic idea is that you don’t need to shout in the room (or Slack): “Who uses staging? I’m taking it now.” Instead, a new environment will be automatically created for you for every new branch you push to your source control system.

Kubernetes has almost everything we might need to dynamically create a separate environment, and destroy it when it's no longer needed: namespaces, deployment automation, ingress, and more.

To properly deploy a web app in a dynamic environment scenario, we'll usually need to assign a public DNS record to each front-facing web component in the environment.

For example, if I just pushed a fix-widget branch and a new environment is created for me to test the results, I would like to be able to browse to fix-widget-app.company.com to interact with the app.

Preferably, I'd also like an SSL certificate assigned to my dynamic endpoints. And if it's not a self-signed certificate that makes me click through annoying browser warnings, even better!

Neither of these functions is part of Kubernetes, but both can be added relatively quickly by installing the external-dns and cert-manager addons. I only need to adjust my deployment manifests with a few extra annotations, and we're ready to go.
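
As a rough sketch, the extra annotations look something like this. The hostname, the letsencrypt-prod issuer, and the secret name are placeholders for illustration, and the sketch assumes external-dns and cert-manager are already installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fix-widget-app
  annotations:
    # external-dns watches ingress resources and creates the DNS record
    external-dns.alpha.kubernetes.io/hostname: fix-widget-app.company.com
    # cert-manager requests a certificate from the referenced issuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - fix-widget-app.company.com
      secretName: fix-widget-app-tls   # cert-manager stores the issued cert here
  rules:
    - host: fix-widget-app.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fix-widget-app
                port:
                  number: 80
```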

2. Nginx/ALB Ingress Controllers

When you spin up a new Kubernetes cluster on GKE, it comes preinstalled with some useful “addons”. One of these built-in extras is a tight integration with Google's L7 load balancer: I just need to create a “public” service or an ingress resource, and Kubernetes will automatically provision a new load balancer for me.

Sometimes, however, I don’t want to provision a new cloud Load Balancer, or I might just need a feature that’s currently not supported by Google’s Load Balancer.

For these cases, Kubernetes supports various load balancers through addons, most often an open-source implementation of an ingress controller such as NGINX or the AWS ALB ingress controller.

I think it was a good decision to extract all the cloud-specific functionality into separate installable modules instead of adding the code to the Kubernetes core. This way I can run multiple ingress controllers in one Kubernetes cluster and update them separately from Kubernetes' main components.
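
Here's a minimal sketch of how that plays out: each ingress resource names the controller that should handle it, so two controllers can coexist in one cluster. The class names below assume the common defaults for each controller, and the hosts and services are made up:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
spec:
  ingressClassName: nginx   # handled by the NGINX ingress controller
  rules:
    - host: api.internal.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-web
spec:
  ingressClassName: alb   # handled by the AWS ALB ingress controller
  rules:
    - host: www.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: public-web
                port:
                  number: 80
```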

3. Cluster-Autoscaler

Autoscaling in Kubernetes is misleading!

When you read the documentation you get a sense that automatically growing or reducing the number of replicas of your application is as easy as adding a simple HorizontalPodAutoscaler resource.

For your newly scaled containers to actually run, however, the Kubernetes scheduler must be able to fit them onto one or more available nodes.

So what happens if you don't have a free spot because all your nodes are working hard (or just pretending they are)? Your pods will hang in a Pending state until some unfortunate pod dies and frees up its space.

For autoscaling to truly work in Kubernetes, you need to think about it at two levels: once at the container/pod level and once at the cluster/node level.

HorizontalPodAutoscaler resources take care of the container/pod level and will add or remove container replicas based on CPU or memory usage.
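
For example, a minimal HorizontalPodAutoscaler that keeps average CPU utilization around 70% might look like this; the target deployment name and the replica bounds are made up for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:          # the workload this autoscaler resizes
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```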

The cluster-autoscaler addon, on the other hand, takes care of the cluster/node level: it adds or removes nodes based on the number of pods that are pending and waiting for a free spot to be scheduled.

This is one of the most critical addons, and it usually requires configuration to align with the applications that will run on the cluster. The fact that the autoscaler executable has more than 100 configuration switches doesn't make the job easier!
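
To give a sense of the tuning surface, here's a small fragment of what a cluster-autoscaler container spec typically looks like. The node-group bounds, the expander choice, and the timing below are examples rather than recommendations:

```yaml
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.2
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      # example node group: scale between 2 and 10 nodes
      - --nodes=2:10:my-node-group
      # pick the node group that wastes the least CPU/memory
      - --expander=least-waste
      # keep similarly-shaped node groups at similar sizes
      - --balance-similar-node-groups
      # how long a node must be underutilized before it's removed
      - --scale-down-unneeded-time=10m
```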

4. Helm

Most Kubernetes users will deploy their application to more than one environment. It’s almost certain that you’ll have at least some sort of pre-production environment where you test before promoting the new version to the users.

The Kubernetes CLI, however, only lets you apply static manifests. If you want to change a few parameters per environment, like the endpoint of your hosted MongoDB database, you need to turn your manifest files into templates and write scripts and automation to fill in the values.

When I first discovered Kubernetes, the lack of templating support in its CLI baffled me: how could such a common scenario go unsupported? I even found a few open discussions over at GitHub which didn't seem to go anywhere.

After a few iterations with bash, sed, and a few other tools, I stumbled upon Helm.

Helm is much more than a templating language, but for me, its main feature is that it allows me to specify the templates for all the services I want to manage with Kubernetes in one place — a Helm chart.
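
A minimal sketch of the idea, reusing the MongoDB endpoint example from above (file names and values are illustrative):

```yaml
# templates/configmap.yaml (a template inside the chart)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  # Filled in from whichever values file is passed at deploy time
  MONGODB_ENDPOINT: {{ .Values.mongodb.endpoint | quote }}
```

```yaml
# values-staging.yaml (per-environment overrides)
mongodb:
  endpoint: mongodb://staging-cluster.example.net:27017
```

Deploying to staging then becomes a single command like `helm upgrade --install myapp ./chart -f values-staging.yaml`.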

Sometimes I’ll also create a higher-level assembly chart that bundles all the microservices in one release. In other cases, just a simple chart-per-repo structure will do the job.

5. Honorable Mention: Istio

There are some tools that everyone is talking about but I still haven’t had the chance to seriously try. Istio is one of them.

The companies we work with have not yet experienced the pains Istio promises to solve. This is partly because, in the early stages of their Kubernetes migration, they are still focusing on nailing down “the basics”.

The reason I decided to mention Istio anyway is that there are a few use cases where I do see people starting to look at it seriously.

One of these use cases is end-to-end encryption, especially since compliance is such a huge priority for many companies. With Istio, the infrastructure can guarantee that all communication between services in your Kubernetes cluster is always encrypted.
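
As a sketch of how little configuration that takes: with Istio's PeerAuthentication API, a single mesh-wide resource turns on strict mutual TLS (this assumes Istio is installed with istio-system as its root namespace):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying to the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT   # sidecars reject any plain-text traffic
```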

Another feature I'm asked about a lot is canary releases: the ability to divert a small portion of live traffic to a new version of your app. Istio promises to add that functionality to Kubernetes.
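
A hedged sketch with Istio's traffic-routing API: a VirtualService that sends 90% of requests to the stable version and 10% to the canary. The host and subset names are placeholders, and a matching DestinationRule (not shown) would define the stable and canary subsets by pod labels:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app
spec:
  hosts:
    - app.default.svc.cluster.local
  http:
    - route:
        - destination:
            host: app.default.svc.cluster.local
            subset: stable
          weight: 90   # 90% of live traffic stays on the stable version
        - destination:
            host: app.default.svc.cluster.local
            subset: canary
          weight: 10   # 10% is diverted to the canary
```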

Why Kubernetes?

In my experience, the most common reasons companies migrate to Kubernetes are to empower developers to deploy and maintain their code and to make it easy to add a new service to support a growing microservices architecture.

Not everyone has the same problems. Sometimes it's a broken deployment script or a complex configuration management solution that kicks off the initial conversation about Kubernetes. Most of the time, however, the underlying motivations are pretty much the same.

Today, I had a pretty unusual call with a prospective client.

When we reached the point in the conversation where I dug a bit deeper into the motivations for the project, I asked: “Why did you decide to kick off a migration to Kubernetes? It seems like you've already got a lot of automation working pretty well in your team.”

This time, I got a very interesting reply: “We want to migrate because of the ecosystem. We don’t want to develop from scratch all the features the Kubernetes community already provides.”

It took me a few seconds to think about it. Is that a valid reason to migrate to Kubernetes?

