May 19

Kubernetes: Extending Tools To Build New Things

On the Operations Team here at AerisWeather, we use Kubernetes to orchestrate our containerized applications and previously wrote about how much we love using it. Kubernetes and its CLI tool, kubectl, help us deploy applications to our production environment in a consistent and reproducible manner. We can update high-throughput applications in place with rolling updates, view cluster-wide metrics, and see what’s running where. Component integration in a large microservice application can be daunting, but we are able to deploy our application with the help of Kubernetes. Emphasis on the “help”. We found ourselves not only integrating parts of our application together, but also parts of our DevOps toolkit.


A fun repurposing of an old wrench, by Tadpole Creek Creations

Current Tools, New Needs

Great software has a focused goal. We strongly believe that, and we build our services to help our customers achieve their goals more easily and efficiently. Kubernetes is focused on “deployment, operations, and scaling of containerized applications.” While that is a good, focused goal, it isn’t a silver-bullet solution for managing the Amazon Web Services (AWS) environment hosting our containers. Great software is also extensible, though, and Kubernetes is wonderfully extensible.

Application-Specific Configuration

A tool as powerful as Kubernetes has a large amount of configuration and uses various configuration files when creating its resources. While powerful, this configuration can be arduous to create and maintain, which is where helper scripts come into play. We started with some pre-formatted YAML, sprinkled in a few templated variables, populated those variables with some of our own structured config, rendered it out a few times, and we were up and running. Unfortunately, things change. A few hacky updates later and we had a working system again. Repeat that process several times and we quickly learned we needed something more robust.
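To give a flavor of the approach (the field names and template variables below are illustrative, not our actual templates), the starting point looked something like a Replication Controller skeleton with placeholders:

```yaml
# Illustrative template only -- variable names here are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: {{ component }}-v{{ version }}
spec:
  replicas: {{ replicas }}
  selector:
    role: {{ role }}
    version: "{{ version }}"
  template:
    metadata:
      labels:
        role: {{ role }}
        version: "{{ version }}"
    spec:
      containers:
        - name: {{ component }}
          image: {{ registry }}/{{ component }}:{{ version }}
```

Rendering that for every component, environment, and cluster combination is exactly where the maintenance pain crept in.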

We need to deploy our applications in several different states of development to different clusters in our data centers, which means a lot of configuration, so we needed a tool to help. After writing some data mappers that make a few assumptions along the way, we have a solution that is working well for us: our own application-specific JSON config, mapped to Kubernetes Replication Controller and Service JSON, with a nice script that automates rolling updates for us. Unfortunately it’s still very specific to our use case, and it’s unclear how much of it can be reused in the future. That might be the nature of what we’re trying to do, however. Going from a compact form to a larger form will always require some assumptions, and in this case, those assumptions are specific to our applications. We will continue to wrestle with this and pull out as much as we can. As with any tool that makes automation easier, if we find something of value in there that others might be able to use, we will release it on our GitHub page.
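The core idea of the mapper is simple. This is a minimal sketch only -- our real mapper is internal, and the compact config shape here (name, version, image, replicas, role) is a hypothetical stand-in, not our actual schema:

```javascript
// Expand a compact, application-specific config into a full
// Kubernetes Replication Controller manifest. Illustrative sketch;
// the input fields are hypothetical stand-ins for our real schema.
function toReplicationController(app) {
  const version = String(app.version);
  const name = `${app.name}-v${version}`; // versioned RC name (see best practices)
  return {
    apiVersion: 'v1',
    kind: 'ReplicationController',
    metadata: { name },
    spec: {
      replicas: app.replicas,
      selector: { role: app.role, version },
      template: {
        metadata: { labels: { role: app.role, version } },
        spec: { containers: [{ name: app.name, image: app.image }] },
      },
    },
  };
}

const rc = toReplicationController({
  name: 'my-component',
  version: 124,
  image: 'example.com/my-component:124',
  replicas: 3,
  role: 'api',
});
console.log(rc.metadata.name); // my-component-v124
```

The expansion is where the assumptions live: the mapper decides label conventions, naming, and defaults so the per-application config can stay small.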

We did learn a few best practices along the way:

  • Every container should be a Pod defined in a Replication Controller – We found this out very early on. Creating a Replication Controller gives your container the ability to auto-restart and scale. On the loss of a Node, the container will be restarted on another Node. Replication Controllers essentially wrap containers (as Pods) and provide a lot of benefits.
  • Replication Controllers should be versioned – We append versions (like my-component-v123) to the end of our Replication Controller names. This lets us quickly see which versions of our containers are running, and it enables rolling updates to new versions, since rolling updates require unique Replication Controller names.
  • Services should point at roles, not unique versions – Services should not be updated when the application updates. Some Services may have external load balancers attached to them and it gets really messy to move those around on a live application. Services point at Pods, so when creating Replication Controllers ensure the Pods have a “role” assigned to them. Then, when creating a Service, the selector should be role based. This will allow continued service through a rolling update.
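Put together, those practices look roughly like the following pair of manifests (component names and ports here are illustrative):

```yaml
# Illustrative manifests -- names, labels, and ports are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-component-v124        # versioned name, unique per rolling update
spec:
  replicas: 3
  selector:
    role: api
    version: "124"
  template:
    metadata:
      labels:
        role: api                 # the "role" label the Service selects on
        version: "124"
    spec:
      containers:
        - name: my-component
          image: example.com/my-component:124
---
apiVersion: v1
kind: Service
metadata:
  name: my-component              # stable name; never changes per release
spec:
  selector:
    role: api                     # role-based, so deploys never touch the Service
  ports:
    - port: 80
```

Because the Service selects only on `role`, a `kubectl rolling-update` from `my-component-v123` to `my-component-v124` swaps Pods underneath it without the Service (or its load balancer) ever changing.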

Extending Kubernetes – kubejs

A Kubernetes cluster is just that – a cluster. There isn’t really anything magical about it: the Nodes are still virtual machines running somewhere – AWS, in our case. These servers sometimes need maintenance, sometimes need to be moved, and so on. kubectl doesn’t really provide user-friendly tooling for managing these Nodes.

We created a small CLI tool, kubejs, to aid in the more advanced tasks that kubectl didn’t provide yet. We can easily evict a Node, redistribute Pods, decommission a Node for maintenance, and even scale up an autoscaling group to add a Node to the cluster. kubejs started out as a small Node.js SDK and grew into a really handy CLI tool. It also integrates with cAdvisor and can quickly get stats like memory and CPU usage per Pod. For example, to decommission a Node with kubectl:
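Roughly, the manual dance looks something like this (a partial sketch; the Node name is hypothetical, and the pod-by-pod wait-and-verify loop is elided):

```shell
# 1. Mark the Node unschedulable so no new Pods land on it.
kubectl patch node ip-10-0-0-42.ec2.internal \
  -p '{"spec": {"unschedulable": true}}'

# 2. Find the Pods currently running on that Node.
kubectl get pods -o wide | grep ip-10-0-0-42.ec2.internal

# 3. Delete each Pod so its Replication Controller reschedules it
#    elsewhere, waiting and re-checking between deletions...
kubectl delete pod some-pod-abc12
```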

(You’ll notice I left a lot of that to the imagination/reader exercise because it’s a giant pain)

vs kubejs:
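kubejs collapses all of that into a single command. The exact subcommand name below is an assumption for illustration -- see the GitHub repo for the real CLI syntax:

```shell
# Hypothetical invocation; check the kubejs repo for actual usage.
kubejs decommission-node ip-10-0-0-42.ec2.internal
```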

Development of kubejs

We love that the Kubernetes team is adding more and more features to their tools. It’s great to see so many updates to a great platform, but we needed a few specific things for our use case. kubectl has many new features coming that are currently in alpha. We’re looking forward to those, and to expanding kubejs in creative ways that fill niches too small for core Kubernetes. We have high hopes of building a web GUI on top of kubejs and will hopefully get there eventually, but for now it exists as a very basic SDK and a really handy CLI tool.

kubejs is under active development and available on GitHub
