Telemetry bus at the edge – Part 1: An overview
To solve relevant use cases at the edge, Avassa has a built-in, edge-native telemetry bus: Volga. We also sometimes refer to it as the Avassa pub/sub bus.
In this article series, we describe the bus in more depth: why we built it and what its unique features are, illustrated with some usage examples. This first part gives an overview.
Telemetry bus foundations
In this article series, we will use the terms telemetry and publish/subscribe. Let us first define these:
- Telemetry: a way of measuring data at its source and delivering it to receivers (in contrast to polling).
- Publish/subscribe: a pattern for loose coupling in which senders do not deliver data directly to receivers; instead they categorize the data into topics, and consumers subscribe to the topics they are interested in (see the sketch below).
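To make the decoupling concrete, here is a minimal, self-contained Python sketch of the pattern. It is purely illustrative and does not use Volga's actual API: producers publish to named topics and never address consumers directly.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class MiniBus:
    """A tiny in-memory pub/sub bus: producers and consumers share only topic names."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # A consumer registers interest in a topic, not in a specific producer.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # A producer publishes to a topic without knowing who (if anyone) is listening.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MiniBus()
bus.subscribe("sensors:temperature", lambda msg: print("received", msg))
bus.publish("sensors:temperature", {"site": "store-17", "celsius": 21.5})
```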
What characterizes data at the edge?
Many edge use cases generate massive amounts of data that, as a first step, need to be filtered, stored, and processed locally. The days when the edge was just a dumb data forwarder are gone. And it is not just sensor data; we also need to collect metrics from the edge hosts and the applications (container logs, for example).
The first step is to categorize the data into topics so that edge-local and central applications can both search historical data and subscribe to live streams.
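As a rough illustration of that dual role (the topic name and the retention mechanics below are made up for the example and are not Volga's actual interface), a topic can be modeled as an append-only log that supports both querying stored history and tailing a live stream:

```python
import time
from typing import Iterator, List

class Topic:
    """Illustrative append-only topic: supports both historical reads and live tailing."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._log: List[dict] = []  # retained messages, oldest first

    def append(self, message: dict) -> None:
        self._log.append({"ts": time.time(), **message})

    def query(self, since_ts: float = 0.0) -> List[dict]:
        # Historical search: return everything recorded at or after a given timestamp.
        return [m for m in self._log if m["ts"] >= since_ts]

    def tail(self, poll_interval: float = 0.5) -> Iterator[dict]:
        # Live stream: yield new messages as they arrive (polling keeps the sketch simple).
        position = len(self._log)
        while True:
            while position < len(self._log):
                yield self._log[position]
                position += 1
            time.sleep(poll_interval)

logs = Topic("container-logs")
logs.append({"host": "edge-host-1", "line": "payment service started"})
print(logs.query())  # search history; a real bus would add filtering and pagination
```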
It is also important to realize that the sheer amount of data implies that pushing everything to the cloud is not realistic due to bandwidth limitations and (cloud) networking costs. Privacy might also require data to stay at the edge.
In the second step, we have clients consuming the data at the edge, for example another local application that needs it to perform edge AI inference. Some data at this step, such as critical alarms, needs to be sent to the cloud immediately.
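As a hedged sketch of such a local consumer (the alarm format and the send_to_cloud callback are hypothetical stand-ins, not an Avassa API), the idea is to filter locally and forward only what must leave the site immediately:

```python
import json
from typing import Callable, Iterable

def forward_critical(alarms: Iterable[dict], send_to_cloud: Callable[[str], None]) -> None:
    """Consume a local alarm stream and forward only critical alarms to a central sink."""
    for alarm in alarms:
        if alarm.get("severity") != "critical":
            continue  # non-critical data stays at the edge for local processing
        send_to_cloud(json.dumps(alarm))  # in production: retries, batching, backpressure

# Stand-in for an uplink to the central site (an HTTP POST, a central topic, etc.).
forward_critical(
    [
        {"severity": "info", "text": "fan speed normal"},
        {"severity": "critical", "text": "freezer temperature above threshold"},
    ],
    send_to_cloud=print,
)
```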
Third, central applications need to be able to subscribe to selected data across the edges to perform analytics. This could be a query across edge sites or a live stream.
Finally, resources are limited at the edge. While we need to keep data local to the edge, memory and disk space are constrained, in stark contrast to cloud data platforms. Connectivity is another underlying characteristic that greatly affects edge data: the link to the central cloud may be down for periods of time, and bandwidth may be limited. So caching is important, as is robustness against network issues.
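One way to picture this constraint (purely illustrative; the class and limits below are not how Volga is configured) is a size-capped topic that evicts its oldest messages so local storage never grows without bound:

```python
from collections import deque
from typing import Deque, List

class BoundedTopic:
    """Illustrative size-capped topic: old messages are evicted to bound local storage."""

    def __init__(self, name: str, max_messages: int) -> None:
        self.name = name
        # A deque with maxlen drops the oldest entry automatically when full.
        self._log: Deque[dict] = deque(maxlen=max_messages)

    def append(self, message: dict) -> None:
        self._log.append(message)

    def snapshot(self) -> List[dict]:
        return list(self._log)

metrics = BoundedTopic("host-metrics", max_messages=3)
for i in range(5):
    metrics.append({"sample": i})
print(metrics.snapshot())  # only the 3 newest samples remain
```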
So, in the end, data management for the edge acts like a distributed database: data lives at the edge sites but needs to be efficiently available to central solutions.
Why not just pick an existing telemetry bus?
Why not just throw an existing bus like Pulsar or Kafka at the problem? These are highly capable pub/sub buses, but what they have in common is that they are optimized for central use cases. There is no obvious way to run many distributed instances at the edges combined with distributed searches managed from a central endpoint. They also have a fairly large footprint and require configuration and management tasks that would be challenging to carry out across a large set of edge sites.
Avassa is a multi-tenant, application-centric platform. The same should go for telemetry: you should be able to split your edge resources across tenants, and telemetry bus topics should be part of that split. A unified multi-tenancy model that also covers a stand-alone bus would be challenging to achieve and manage. You also want the bus to be aware of applications and deployments; as we will show later in this article series, you want to query for data bound to a specific application deployment, for example.
From a usability point of view, you want one API endpoint, one command line, and one UI for your complete edge infrastructure and applications. With a built-in telemetry bus, all of that is achieved.
Meet the Avassa edge-native telemetry bus
The Avassa solution deploys a single small container, Edge Enforcer, on each edge host. That single container performs edge-site cluster management, container lifecycle management, and secrets management, AND embeds an edge-native telemetry bus named Volga. As a user of the Avassa platform, you get a unified API across all feature sets and one single container that is lifecycle-managed by Avassa. There are no configuration, installation, or administration tasks for Volga, in contrast to an edge built from separate components, including a separate one for the edge bus.
Volga is instantiated at each edge site and in the central Control Tower. All instances have local topics and local storage. If a site consists of several hosts, Volga forms a site-local cluster for fault tolerance, which means that a topic's data is replicated among the hosts. The Avassa platform provides built-in topics with telemetry data for hosts, applications, and the Avassa platform itself.
A fundamental feature is that you can post a query or subscribe request centrally to the Control Tower Volga instance; the request is automatically distributed to all edge sites, and an aggregated response is sent back.
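As a rough sketch of that fan-out pattern (the site names, topic name, and per-site query functions are stand-ins, not Volga's actual API), the central instance queries each site in parallel and merges the per-site results into one response:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

def distributed_query(
    sites: Dict[str, Callable[[str], List[dict]]], topic: str
) -> List[dict]:
    """Fan a query out to every edge site in parallel and merge the per-site results."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(query_fn, topic) for name, query_fn in sites.items()}
    aggregated: List[dict] = []
    for site_name, future in futures.items():
        for row in future.result():
            aggregated.append({"site": site_name, **row})  # tag each row with its origin
    return aggregated

# Stand-ins for per-site query endpoints; a real deployment would call each site's bus.
sites = {
    "store-amsterdam": lambda topic: [{"topic": topic, "cpu": 0.42}],
    "store-berlin": lambda topic: [{"topic": topic, "cpu": 0.71}],
}
print(distributed_query(sites, "host-metrics"))
```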
You should also note that Volga is multi-tenant; topics and data are securely isolated between tenants.
This concludes the first part of this article series, providing an overview of what an edge-native telemetry bus can look like. If you’d like to learn more, I recommend continuing with the following parts of the series:
Telemetry bus at the edge – Part 2: Examples
Telemetry bus at the edge – Part 3: Consuming and producing