How we ended up not using Kubernetes in our edge platform
The emergence of cloud computing was predicated on flexible application platforms in centralized locations providing massive amounts of computing and storage. The cloud operational model paved the way for a radical rethinking of how teams organize themselves to develop and operate applications, i.e. the DevOps movement. As DevOps principles took shape, the tools to support them throughout the application lifecycle soon followed.
Having largely completed their cloud journey, enterprises are now turning their attention to running an appropriate subset of their business-critical applications at the edge, for reasons including resilient operations, data privacy demands, and the need for predictable application performance.
The fact that the edge is different became evident as application teams tried to apply their current practices and tools to environments where applications are expected to run in many locations with limited resources in each. There is also a lack of robust tooling for edge-specific needs such as geographically constrained placement, monitoring and observability of applications distributed across hundreds of locations, and data security in environments where physical theft is a real threat.
Realizing that we needed to connect the promises of the user interfaces (CLI, REST API, Web UI) with the capabilities of the edge application runtime environment, we started prototyping with various container runtimes for the edge hosts. We worked our way through many of them, including Docker Swarm and Apache Mesos. But even in 2019, it was obvious from our conversations with users that Kubernetes had dominant mind share, built on the familiarity it had earned through its well-deserved success in providing the dominant abstraction for managing containers in clusters. People assumed that Kubernetes would "just work" at the edge. And so did we. Kind of.
As we worked our way through prototyping, we gradually realized that, since Kubernetes was not designed for highly distributed environments in the first place, making robust mappings between the APIs we wanted to provide centrally and the APIs provided by Kubernetes in the distributed clusters was a struggle. On top of that, we needed a site-local container runtime with very low operational overhead that could be configured and upgraded from a central location. We needed the site-local clusters to be cattle, not pets.
Yes, it's software. Yes, you can make it work differently. It depends on how hard you want to fight the soul of the project and how hard you want to try to push that round peg into that square hole.
– Me, on Kubernetes for the edge, during a Fierce Telecom webinar called "The rise of Kubernetes at the edge" in January 2022
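To make the "cattle, not pets" idea concrete, here is a minimal sketch of the kind of centrally driven reconciliation we were after: a central control plane records the runtime version and configuration revision each site should run, and a small per-site agent converges toward it. All names here (`SiteAgent`, `DesiredState`, the version strings) are hypothetical illustrations, not our actual implementation.

```python
# Hypothetical sketch: central desired state, reconciled by a per-site agent.
from dataclasses import dataclass

@dataclass
class DesiredState:
    runtime_version: str   # container runtime version the site should run
    config_revision: int   # monotonically increasing config revision

@dataclass
class SiteStatus:
    runtime_version: str
    config_revision: int

class SiteAgent:
    """Runs at each edge site; pulls desired state and converges toward it."""

    def __init__(self, site_id: str, status: SiteStatus):
        self.site_id = site_id
        self.status = status

    def reconcile(self, desired: DesiredState) -> None:
        if self.status.runtime_version != desired.runtime_version:
            self.upgrade_runtime(desired.runtime_version)
        if self.status.config_revision < desired.config_revision:
            self.apply_config(desired.config_revision)

    def upgrade_runtime(self, version: str) -> None:
        # A real system would stage the upgrade, verify health, and roll back
        # on failure; this sketch just records the new version.
        print(f"[{self.site_id}] upgrading runtime to {version}")
        self.status.runtime_version = version

    def apply_config(self, revision: int) -> None:
        print(f"[{self.site_id}] applying config revision {revision}")
        self.status.config_revision = revision

# Central loop: the same desired state fans out to thousands of sites.
desired = DesiredState(runtime_version="1.4.2", config_revision=7)
for agent in [SiteAgent("store-017", SiteStatus("1.3.9", 5)),
              SiteAgent("store-404", SiteStatus("1.4.2", 7))]:
    agent.reconcile(desired)
```

The point of the pattern is that no site ever needs hands-on attention: a site that drifts, reboots, or is swapped out for new hardware simply converges to the desired state again.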
When we realized that Kubernetes was perhaps not the way forward for us, we started talking to users and people in the industry whom we know to have extensive experience designing and operating Kubernetes at scale. We are all experienced enough to understand that technical excellence at times takes a back seat to more pragmatic values.

We were quite surprised to hear how open they all were to non-Kubernetes solutions for the distributed edge case. The most common line of reasoning was that the operational burden of managing Kubernetes itself is high. Add to that the handful or two of additional software components needed alongside it (think event logging, service mesh, policy management, etc.), and it was described to us as a non-starter.

Instead, we set out to build an opinionated system that would hide the internal complexities of a container application management system that scales to many thousands of sites. We wanted the APIs (and the UI and the CLI) to mainly surface configuration and state directly related to the applications, and to minimize the need to care about infrastructure-specific details. Or, as an early user told us during their trials: "I simply want to run my containers."

By starting from the first principle that multi-container applications and their precise placement should be declaratively defined, while leveraging the pragmatic strength of the container ecosystem, we have built a system that strikes a fair balance between ease of use and flexibility.

We focus on providing the best possible support for teams that want to manage container-based applications on their edge infrastructure, and we have obsessed over how to provide the most comfortable and efficient abstractions for managing the placement, lifecycle, monitoring, observability, and security of container-based applications when compute hosts are spread across many locations, with only a few hosts in each.
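As an illustration of what "declaratively defined placement" can look like, here is a small, hypothetical sketch. The schema and names (`app_spec`, `matching_sites`, the label keys) are invented for this example and are not our actual API.

```python
# Hypothetical sketch of a declarative application spec with placement.
app_spec = {
    "name": "checkout",
    "containers": [
        {"name": "api", "image": "registry.example.com/checkout-api:2.1"},
        {"name": "cache", "image": "redis:7-alpine"},
    ],
    # Placement is expressed as constraints on site labels,
    # not as a list of hosts the operator must track by hand.
    "placement": {
        "match_labels": {"country": "se", "tier": "retail"},
        "max_sites": 500,
    },
}

def matching_sites(spec: dict, sites: list[dict]) -> list[str]:
    """Return the IDs of sites whose labels satisfy the spec's placement."""
    wanted = spec["placement"]["match_labels"].items()
    chosen = [s["id"] for s in sites
              if all(s["labels"].get(k) == v for k, v in wanted)]
    return chosen[: spec["placement"]["max_sites"]]

sites = [
    {"id": "stockholm-01", "labels": {"country": "se", "tier": "retail"}},
    {"id": "oslo-02", "labels": {"country": "no", "tier": "retail"}},
]
print(matching_sites(app_spec, sites))  # ['stockholm-01']
```

The operator states intent once; the platform resolves it to concrete sites, including sites that are added later and happen to match.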
The combination of our prototyping efforts and the validation from people we trust gave us the foundation to decide to base our edge host runtime on the container runtime rather than on Kubernetes. It has allowed us to build an easy-to-use, secure, and scalable PaaS that delights teams who want to focus on their applications.