Avassa’s 2023 Holiday Crackers and Key Takeaways

Edge computing as a generic concept can be argued to be at least as old as the cash register, but the last couple of years have seen a radical uptick in thinking about edge computing as much more than just clever machines running in many locations.

Small form factor servers now offer enough compute power, at low enough price points, to run full-blown operating systems in very distributed locations. This means that the “platformization” of on-site edge environments is now in full swing.

We see a wide variety of devices that used to be thought of as embedded systems, like high-end security cameras, Ethernet switches, and CAN-bus controllers, now running mainstream Linux distributions with room to spare for applications. This means that the industry can start applying the talent, tools, and processes already in use for more centralized environments to the edge as well. Gone are the days when the edge consisted of hyper-siloed solutions built on embedded operating systems hard-coded into vendor-specific operations stacks. The edge becomes a natural extension of the central paradigms, and the barrier to using it can be significantly reduced.

Containers ❤️ the edge

Another side effect of the above is that we can experiment with applying technologies from the cloud context directly to the edge context. Some of them will simply work seamlessly, while others may have assumptions from the cloud paradigm built into them that will limit their applicability.

One technology that we see applying cleanly across the board is containers. The finer point here is that the container ecosystem is not limited to a universal packaging format, valuable as that is; it also offers features for efficiently fetching, starting, and stopping the constituent parts of multi-component applications, along with means for standardized logging, observability, and health monitoring. It gives us most, if not all, of the features required for scalable lifecycle management on pared-down compute nodes across many locations.
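As a rough illustration of these lifecycle primitives, here is a minimal sketch using the docker-py SDK against a local runtime; the image, container name, and memory limit are illustrative placeholders, not a prescription:

```python
import docker

# Connect to the local container runtime's API socket.
client = docker.from_env()

# Fetch: pull an image; layers already present locally are reused.
client.images.pull("nginx", tag="1.25")

# Start: run a container detached, with a modest memory cap for a small node.
container = client.containers.run(
    "nginx:1.25", name="edge-web", detach=True, mem_limit="128m"
)

# Observe: standardized logs and runtime stats are available per container.
print(container.logs(tail=5).decode())
stats = container.stats(stream=False)  # one-shot CPU/memory snapshot
print(stats["memory_stats"].get("usage"))

# Stop: tear the workload down cleanly and remove it.
container.stop()
container.remove()
```

The same pull/run/log/stop vocabulary applies whether the node sits in a cloud region or in a wiring closet, which is exactly why the container ecosystem travels so well to the edge.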

It also means that there are many tools (and lots of experience with them) that can and should be reused when planning for the edge. Some things are markedly different at the edge: the lack of perimeter security and the fact that resources are somewhat scarcer than in the central clouds need to be taken into account and “shifted left” into the developer toolchain. But apart from that, we have seen teams move container workloads from the cloud to the edge, fully managed and monitored, in a matter of days.

The rise of Podman

When we founded Avassa in 2020, most people we talked to used the word “Docker” as a synonym for “container”. As the design of our platform was hammered out and we built towards a first launchable version, we decided to use Docker as the default container runtime for the distributed environment, and to integrate our agent (the Edge Enforcer) with the APIs of the Docker daemon for application scheduling and some networking constructs. We kept close to Docker, which included me presenting virtually at DockerCon in 2022 on the topic of “Containers on Few Computers but in Very Many Places”.

As time went by, we started observing how users were increasingly interested in exploring Red Hat’s Podman as an alternative for production deployments in edge environments. This seems mostly related to two things:

  • Docker is focusing on developer experience rather than production environments. That is, the focus of Docker Engine is to be an excellent substrate under Docker Desktop, and most feature development will be aimed in that direction.
  • Podman is focused on being “fast and light” with its daemonless architecture, “secure” with rootless containers, and “compatible” through strict adherence to the OCI specifications. This resonates with users looking not only for developer environment support, but also for a robust substrate for their distributed edge production environments.

2023 was the year when we saw Podman become a serious and equivalent alternative to Docker for edge environments. After implementing support for Podman as part of our partnership with Red Hat, we are building experience around the benefits of its architecture. We also saw the first RFI that stated a preference for Podman over Docker.
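To make the daemonless and rootless points a bit more concrete, here is a minimal sketch (the image and container names are just placeholders) showing that the familiar Docker-style CLI carries over, while each Podman invocation is an ordinary short-lived process run by an unprivileged user rather than a call to a root-owned daemon:

```python
import subprocess

# Pull and start a container as a regular, unprivileged user (rootless by default).
subprocess.run(["podman", "pull", "docker.io/library/nginx:1.25"], check=True)
subprocess.run(
    ["podman", "run", "--rm", "-d", "--name", "edge-web",
     "docker.io/library/nginx:1.25"],
    check=True,
)

# No long-running daemon: each command talks directly to the OCI runtime,
# and container state lives under the invoking user's account.
print(subprocess.run(
    ["podman", "ps", "--format", "{{.Names}} {{.Status}}"],
    capture_output=True, text=True, check=True,
).stdout)

subprocess.run(["podman", "stop", "edge-web"], check=True)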

Kubernetes still isn’t right for the edge

You don’t have to follow the Memenetes X/Twitter account to notice that a more nuanced view of Kubernetes is growing in the industry post peak hype, which I approximate to have been in early 2023. I believe we are now slowly moving from a default “Kubernetes everywhere and at all times, prove me wrong” position towards appreciating the fundamental assumptions of Kubernetes and how they map to fairly specific problem spaces. This may also allow us to avoid calling things Kubernetes that simply aren’t.

The rate of conversations I’m now having with teams that have tried using Kubernetes as a container orchestrator for on-site edge environments and are now looking for alternatives is growing. It is of course far easier to have conversations about appropriate abstractions with teams that have the experience of trying than with starry-eyed but mostly unscarred teams with serious symmetrophilia in the tooling domain.

Keep reading: Avassa and Kubernetes at the edge

The growing body of experience from failure, together with the emergence of very solid architectural alternatives like Podman paired with an immutable Linux distribution such as Red Hat’s RHEL for Edge, will make next year very interesting in the realm of tooling choices for the edge.

We are at the center of edge terminology

Language is tricky in general, and naming things is specifically and famously one of the two hard things in computer science, according to the late Phil Karlton. The language around edge computing has been ever shifting and in need of some sort of stability so we can start understanding each other. Many have been called, but few seem to be chosen, and I suggest that one of the few is Gartner. In order to publish some type of coherent analysis, they need a relatively stable taxonomy, and they seem to have settled on one with their Hype Cycle series of publications for Edge Computing.

We have been mentioned twice as a sample vendor, associated with two separate terms, in this year’s Hype Cycle:

Firstly, they have picked the term Edge Management and Orchestration (pop-culturally insensitively shortened to EMO) for the part of the solution stack that corresponds to our architecture, which is to

[…] provide layers of control over server and device management, network and security management, the infrastructure software stack and the applications themselves.

Hype Cycle for Edge Computing, Gartner 2023

And it seems to have at least partially stuck. I have used it unprompted in many conversations, and adjusting for friendly but shifty-eyed nodding, I predict that it will have a high rate of success.

The second term that we are associated with is Edge PaaS, which is somewhat more loosely defined as:

[…] a type of cloud-oriented application platform that is purpose-built for capacity-constrained environments at the network edge.

Hype Cycle for Edge Computing, Gartner 2023

What I like about it is that it puts increased focus on the platform aspect of this emerging architecture: any solution with the ambition to provide a “cloud-like” experience for application teams needs to take into account a number of aspects above and beyond starting and stopping singular containers, including integrated security measures to protect data in flight and at rest, secrets management, monitoring, and observability.

We like the signs of convergence here, and are proud to be mentioned twice.

Avassa won’t be keeping Santa busy this year

2023 has been a huge success for us in terms of feature release cadence. There are several really important items in the requirement specification for an efficient, robust, and secure Edge Platform, and we won’t need to put any of them on the wish list this year. Catch up on our feature releases and monthly highlights in our report library.

Moving into 2024

As we move towards the end of one year and the beginning of a new one, I can’t wait to see what this fast-moving beast of an industry has to offer. From Avassa’s side, we are excited to hit the ground running in 2024, with several stellar features lined up in our product roadmap, a growing user base, and several exciting events to attend. First up is NRF in January, so if you’re on site, make sure to find us in booth 954.

But first, have a fantastic end of year and restful holidays!
