Avassa End-of-year Summary: Carl’s 2025 Wrap Up

A humorous image of a tortilla wrap with Carl's face and the text 'Carl's 2025 Wrap Up', illustrating Avassa's year-end summary for edge computing.

Five years sits somewhere between the feeling of long, full days and that of short years. As a company we have taken on many projects, but there’s still a sense that time truly flies when you’re doing interesting things.

We have gone from an empty source repository in 2020 (it is still called code, by the way) to launching a product, being recognized as a leader in our field, and being relied upon by companies with serious business at stake on our platform.

Starting a product company means that you are running headlong into the following quote from Thomas Edison in 1921:

The value of an idea lies in the using of it.

To me, that quote suggests that coming up with ideas is easy and cheap, but building something that people apply to their real-world challenges is a radically different thing. By now, we have done both as a company. We came across an idea that looks ridiculously simple in hindsight. The core assumption is that many operations teams would like the same comfortable lifecycle management experience for applications at the edge that they are used to in the cloud.

From this fundamental assumption, we had to determine what the real problem was and develop a product to address it. This takes us straight to the next quote, this time (it is believed) from Steve Jobs:

If you define the problem correctly, you almost have the solution.

The challenge here, of course, is in the first half of the sentence. Maybe with some extra focus on the word “correctly”. From the founding of the company, we have been on a journey to build what we believe is a stellar solution to a correctly defined version of the problem.

And since edge computing is still in an early stage, the conversation about defining the correct problem has taken us on a journey through the solution space. We have been working hard to answer questions like

While our engineering team has a long background in distributed and fault-tolerant systems, these kinds of questions require deeper thinking and an exploratory approach to development.

The exploratory nature of the problem space also means that people naturally tend to apply technologies and experience from other contexts. The most common version of this in edge computing is trying to use Kubernetes all the way out to the edge. After some early research and trial implementations, we ended up not using Kubernetes in our solution, having realized that the operational burden of managing Kubernetes itself is prohibitively high.

While the Kubernetes idea was pretty dominant a couple of years ago, we now find ourselves increasingly involved in conversations about the appropriate runtime for different contexts. Many of them start with a version of this third (and last, I promise) quote, this time from Peter Drucker, in an article from 1963:

There is nothing so useless as doing efficiently that which should not be done at all.

That quote is worth keeping in mind when reading about solutions that make Kubernetes less hard to manage (reducing operational fatigue) instead of focusing on the specific problems at hand.

One recent practical example of this was a Kubernetes podcast on why Kubernetes is not, in general, the solution for edge environments, and why “other runtimes” may offer superior qualities. We now have several new users based on the content of that podcast alone.

Another technology that continues to have a massive impact on our customers is AI. Hardly a unique observation, I know, but several sub-topics are very specific to edge computing. AI puts additional pressure on orchestration solutions: efficient distribution of artifacts (models), distributed developer tooling and telemetry, and rapid, efficient upgrade paths for distributed software components, all things we believe we have opinionated and elegant solutions for.

These are just some of the things we have built and keep building on top of to make the next five years as interesting and exciting as the first.

Glancing ahead to 2026, I am particularly excited that several of our customers are now rapidly scaling out their edge footprint, adding new sites every week while also increasing the number of applications running on each site. They are turning their distributed low-level infrastructure into a coherent platform. And they are extending the tools and processes they already have in place for their clouds to cover this new platform, extracting the value of our initial idea by using it.