April 2023: Feature releases & highlights

Releases: 23.4.0-23.5.0

April, come she will – When streams are ripe and swelled with rain
May, she will stay – Resting in my arms again
– Paul Simon

We see more and more industrial IoT use cases where connectivity for individual hosts varies and MQTT message buses dominate. This has led us to the following significant features for the April releases:

  • Site observability: we have added graphs for site- and host-related metrics in the Control Tower user interface.
  • Site host proxies: some hosts may lack an internet connection. You can now let any host on a site act as a proxy for the other hosts.
  • Custom alerts: as an application developer, you can now publish your own alerts on the built-in alert topics, and they will appear in the user interface.
  • MQTT bridge: MQTT is a widely used message bus in industry. We have added a bridge between our built-in pub/sub bus and MQTT.
  • Predictable IP addresses: you can now slice a configured pool of site ingress addresses amongst tenants and applications.

Site observability

We have now added graphs for the site-related metrics in the user interface. This mirrors last month’s releases, where graphs were added for application metrics.

If you select a host, you can expand the window to plot all collected host metrics, as shown below. You can also easily modify the plot interval.

We have also enhanced the overall user experience for inspecting sites:

  • Site labels are always visible at the top.
  • Columns with static host data, such as OS and platform, have been added.
  • Dedicated tabs let you navigate into the details for sites and hosts.
  • Discovered GPUs and devices are indicated more precisely in the site details.

Some example screenshots are shown below, starting with the new default site view. Here you see the site labels, including device and GPU labels.

If you expand the site label section, you will get more info on the labels, like configured rules for devices and GPUs:

You can select and see the host details in a new focused view:

Site host proxies

A unique feature of the Avassa platform is that the hosts on a site form an autonomous, self-healing cluster without the need for connectivity to the central Control Tower. But earlier releases of the Avassa platform assumed that each individual host could reach the Control Tower for the initial call-home process and for subsequent central configuration and deployment changes. In some deployments, only a subset of the hosts can reach the internet (or the private data center). Therefore, we have now introduced a feature to configure specific hosts as site proxies.

In the illustration above, only host3 communicates with the Control Tower; host1 and host2 use host3 as a proxy.

This is configured in the supd.conf file on the hosts. The same configuration can be used on all hosts on a site, irrespective of whether a specific host is a proxy or needs one. An example is given below:

host-id: "factory-floor-host"
    - api.test.acme.avassa.net
  parent-proxy-call-home-port: 5657
  parent-proxy-api-port: 5656
  parent-proxy-registry-port: 5858
  parent-proxy-volga-port: 5959

In this example, the hosts with internal IP addresses and will act as proxies for all other hosts on the site. The ports are optional; if they are omitted, the system default proxy ports are used.
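The shared-configuration idea can be illustrated with a small sketch (purely conceptual, not supd code): every host reads the same configuration, and a host that finds itself in the proxy set calls home directly while all others go via a proxy. The host names and return values here are hypothetical.

```python
# Conceptual sketch only: one configuration shared by all hosts on a site.
# A host in the proxy set connects to the Control Tower directly;
# every other host routes through one of the configured proxies.
PROXIES = {"host3"}  # hypothetical proxy host from the illustration above

def upstream_for(host: str) -> str:
    """Return how this host reaches the Control Tower."""
    if host in PROXIES:
        return "control-tower:direct"
    # Any configured proxy will do; pick one.
    return f"{next(iter(PROXIES))}:proxied"

for h in ["host1", "host2", "host3"]:
    print(h, "->", upstream_for(h))
```

Because the decision is derived from the host's own identity, the identical file can be distributed to every host on the site.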

Read more in our infrastructure networking fundamentals and add-a-site documentation.

Custom alerts

The Avassa platform has two built-in topics for alerts. An alert indicates a serious state that needs attention. This is different from events and logs in general, which are used in other resolution and analysis phases to understand what has happened. Alerts must be emitted with care, on the assumption that each one will trigger operational work. The two built-in topics are:

The Control Tower has built-in functionality to show these, such as the alarm bell at the top right:

Assume you are an application developer, you have deployed your applications through the Avassa platform, and you would like to generate application-specific alerts. You could, of course, define your own topic in the Avassa pub/sub bus Volga:

$ supctl do --site factory-1 volga create-topic acme-alarms string
$ supctl do --site factory-1 volga topics acme-alarms produce my-alarm-42
$ supctl do --site factory-1 volga topics acme-alarms consume
{
  "time": "2023-05-04T13:43:10.388Z",
  "seqno": 1,
  "remain": 24,
  "producer-name": "REST-api",
  "payload": "my-alarm-42",
  "mtime": 1683207790388,
  "host": "ip-10-20-2-245"
}

By consuming that topic and feeding it to, for example, the Grafana Alert List, you would have your alarm dashboard.
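If you consume the topic programmatically rather than via supctl, each record is plain JSON. A minimal Python sketch of reshaping such a record into a structure a dashboard could ingest (the record shape is taken from the example above; the target field names are hypothetical, not a fixed Grafana schema):

```python
import json

# A Volga record as returned by a consume call, copied from the example above.
record = json.loads("""
{
  "time": "2023-05-04T13:43:10.388Z",
  "seqno": 1,
  "remain": 24,
  "producer-name": "REST-api",
  "payload": "my-alarm-42",
  "mtime": 1683207790388,
  "host": "ip-10-20-2-245"
}
""")

# Reshape into a minimal alert structure for a dashboard.
# These target field names are illustrative only.
alert = {
    "title": record["payload"],
    "source_host": record["host"],
    "time_ms": record["mtime"],
}
print(alert)
```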

But it would be convenient if your application-specific alarms showed up in the Control Tower user interface. For that purpose, we have now added API calls that let you publish alarms on the system:alerts topic. This way, users of the Avassa Control Tower can use the built-in alarm functionality.

To test this new functionality, you can log in to a site with supctl. In the example below, I have a shell on one of the hosts on the site. Note that this API or supctl call must be made on the site from which the alert is to be generated. (This is also a hint on how to connect the Avassa command line to a site, which is very useful if, for example, a site is temporarily disconnected.)

$ supctl --host localhost --port 4646 do login joe@example.org
$ supctl do alert unique-id unique-name "my-alarm-42" critical

This will result in the following UI view:

MQTT bridge

The Avassa platform has a built-in edge-native pub/sub bus, “Volga.” This is a low-footprint bus with characteristics purpose-built for the edge, such as resilience to network outages and low bandwidth use. A built-in bus also greatly enhances the developer and operations experience across the Avassa functions: one API, common logging, and unified multi-tenancy. However, in many industrial environments it is a requirement to be able to consume and produce messages over MQTT. Therefore, we have now added an MQTT bridge.

This lets an edge application consume MQTT messages on the site and publish MQTT messages to your central MQTT broker.

You deploy the MQTT bridge on your site. The bridge uses the site-local Volga as a local cache:

An example Avassa application specification for the bridge is shown below. You can see that it is configured to listen on TCP 1883 and to push to a central MQTT broker at

name: mq-bridge
services:
  - name: mq
    mode: replicated
    replicas: 1
    volumes:
      - name: config
        config-map:
          items:
            - name: config.yaml
              data: |
                  - volga-topic: mqtt-bridge
                    listen-username: test-user
                    listen-password: password
                    upstream-username: test-user
                    upstream-password: password
    network:
      ingress-ip-per-instance:
        protocols:
          - name: tcp
            port-ranges: "1883"
      outbound-access:
        allow-all: true
    containers:
      - name: mq-bridge
        image: registry.gitlab.com/avassa-public/mq-bridge/mq-bridge
        approle: mq-bridge
        mounts:
          - volume-name: config
            files:
              - name: config.yaml
                mount-path: /config.yaml
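The store-and-forward role that Volga plays as a local cache can be illustrated with a small sketch (purely conceptual, not the bridge's actual code): messages are buffered on the site while the upstream broker is unreachable and flushed, in order, once connectivity returns.

```python
from collections import deque

class StoreAndForward:
    """Conceptual sketch of the bridge's local-cache behaviour:
    buffer messages while the upstream broker is down, then flush
    them in order once it is reachable again."""

    def __init__(self):
        self.buffer = deque()   # stands in for the site-local Volga topic
        self.upstream = []      # stands in for the central MQTT broker
        self.upstream_up = False

    def publish(self, msg: str) -> None:
        self.buffer.append(msg)
        self.flush()

    def flush(self) -> None:
        # Drain the local buffer only while the upstream is reachable.
        while self.upstream_up and self.buffer:
            self.upstream.append(self.buffer.popleft())

bridge = StoreAndForward()
bridge.publish("temp=21.5")   # broker unreachable: cached locally
bridge.publish("temp=21.7")
bridge.upstream_up = True     # connectivity restored
bridge.flush()
print(bridge.upstream)        # messages arrive upstream in original order
```

This is the property that makes the bridge resilient to the intermittent connectivity typical of edge sites.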

Predictable IP addresses

The Avassa platform has rich support for configuring IP ingress on your site. IP ingress lets clients reach the deployed container applications on the site. The site provider configures ingress options:

  • None: no ingress is allowed
  • DHCP: applications can request an IP address and will get one from a DHCP server on the site
  • Pool: the site provider configures a pool of available addresses, and applications will get an address from that pool.

In earlier releases, an application would get any available address from the configured pool. In some cases it is desirable to have more control over which addresses are available to which tenants, or to be able to choose a specific IP address for a specific service. We have therefore added a feature that uses labels to define different ranges, in both site-wide and per-interface pools.
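The slicing idea itself is straightforward; the sketch below (plain Python, not Avassa code, with hypothetical addresses and labels) shows one pool carved into labeled sub-ranges, one per tenant, with an unlabeled remainder as the default pool.

```python
import ipaddress

# Illustrative only: slice one site-wide ingress pool into labeled
# sub-ranges. Addresses, tenant names, and labels are hypothetical.
pool = ipaddress.ip_network("192.168.100.0/24")

# Carve the /24 into four /26 sub-ranges.
subnets = list(pool.subnets(new_prefix=26))

# Attach labels to each sub-range, mirroring the label-based
# range configuration described above.
ranges = [
    {"range": str(subnets[0]), "labels": {"tenant": "acme", "scope": "global"}},
    {"range": str(subnets[1]), "labels": {"tenant": "edge", "dedicated": "yes"}},
    {"range": str(subnets[2]), "labels": {}},  # empty label set: default pool
]

for r in ranges:
    print(r["range"], r["labels"])
```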

The below YAML shows an example site configuration using IP Pool ranges/labels.

name: factory-floor-1
type: edge
ingress-allocation-method: pool
ingress-ipv4-ranges:
  - range:
    network-prefix-length: 24
    labels:
      tenant:
        - acme
        - edge
      scope: local
  - range:
    network-prefix-length: 24
    labels:
      tenant: acme
      scope: global
  - range:
    network-prefix-length: 24
    labels:
      tenant: edge
      dedicated: yes

By default, an application owner tenant can only access ranges with an empty label set. The site provider can create resource profiles that assign pool ranges to a specific tenant, for example as shown below:

name: t-acme
  allowed: "tenant = acme or {}"

If this resource profile is assigned to a tenant, that tenant can pick from IP addresses labeled acme or with an empty label set.
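As a sketch of how such an expression selects ranges (the matching semantics here are assumed and greatly simplified from the product's actual expression language):

```python
# Simplified, assumed semantics of the expression "tenant = acme or {}":
# a range is allowed if its label set contains tenant=acme,
# or if its label set is empty.
def allowed(labels: dict) -> bool:
    return labels == {} or labels.get("tenant") == "acme"

# Hypothetical ranges with label sets, for illustration only.
ranges = [
    {"range": "10.0.1.0/26", "labels": {"tenant": "acme", "scope": "global"}},
    {"range": "10.0.2.0/26", "labels": {"tenant": "edge"}},
    {"range": "10.0.3.0/26", "labels": {}},
]

usable = [r["range"] for r in ranges if allowed(r["labels"])]
print(usable)  # the acme-labeled range and the unlabeled default range
```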

When an application requests an IP address from a pool, it can then refer to these labels, as shown below:

name: with-ingress
services:
  - name: need-ingress
    network:
      ingress-ip-per-instance:
        protocols:
          - name: tcp
            port-ranges: "80,8080"
          - name: udp
            port-ranges: "90"
        match-interface-labels: type = wan
        match-ipv4-range-labels: scope = global

And as before, when you deploy an application that requests an ingress IP, you will see the current IP address and DNS names as state on the application service.

Try it yourself

Book a demo

Deploy container applications across a distributed edge cloud in minutes. Book a demo today for a closer look at the Avassa platform!

Now off to Edge Computing Expo, baggage loaded with features!

