September 2022: Feature releases & highlights

The falling leaves
Drift by my window
The autumn leaves
Of red and gold

Music: Joseph Kosma; lyrics: Jacques PrƩvert and Johnny Mercer

And we are happy to announce three new golden features falling out of the Avassa platform:

  1. Device (/dev) discovery: many edge scenarios include devices such as cameras and sensors connected to the edge hosts. Avassa now supports device discovery and labeling so that applications can be scheduled based on the availability of a specific type of device.
  2. Site and host observability: as a site provider, you will now get edge site and host health insights.
  3. Persistent volumes: Avassa now supports two kinds of storage volumes for edge applications: ephemeral and persistent (new). Ephemeral volumes live only as long as the container application is deployed on a site. We have now added support for persistent volumes so that a later deployment can pick up the data left by a previous application. This can be useful for database applications, for example.

Device discovery

Many edge use cases require the edge application to mount a device into the application, for example, a camera. As a site provider, you would like to provide labels for specific devices, and as an application developer, you would like to control the placement of your application on sites and hosts where the required device exists.

The standard way of managing devices in Linux is udev. udev comes with a collection of rules that describe to the device manager how devices are mapped and named.
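A udev rule is a comma-separated list of match expressions, optionally followed by assignments. For orientation, a hypothetical rule file could look like this (the vendor/product IDs and symlink name are made up for illustration):

# /etc/udev/rules.d/99-serial.rules (illustrative example)
# Match a USB serial adapter by vendor/product ID and create a
# stable symlink /dev/my-serial pointing to it
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="my-serial"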

To list devices on a Linux host, you can use udevadm, for example:

$ udevadm info -e
...

P: /devices/platform/serial8250
E: DEVPATH=/devices/platform/serial8250
E: DRIVER=serial8250
E: MODALIAS=platform:serial8250
E: SUBSYSTEM=platform

You can then request further information for a specific device:

$ udevadm info -ap /devices/platform/serial8250/tty/ttyS1

This will list key-value pairs for the device (ttyS1) and its parent devices (serial8250, platform):

looking at device '/devices/platform/serial8250/tty/ttyS1':
    KERNEL=="ttyS1"
    SUBSYSTEM=="tty"
    DRIVER==""
    ATTR{iomem_reg_shift}=="0"
    ATTR{console}=="N"
    ATTR{line}=="1"
    ...

looking at parent device '/devices/platform/serial8250':
    KERNELS=="serial8250"
    SUBSYSTEMS=="platform"
    DRIVERS=="serial8250"
    ATTRS{driver_override}=="(null)"

...

With the above as a udev introduction, we can now briefly describe the process of using devices in Avassa.

  1. Site provider: define device rules; these rules specify which devices to search for and produce the device labels used for placement.
  2. Application owner: specify device labels matching the devices your application requires and mount those in your application container.

Device rules

You configure device rules on sites. The Edge Enforcer will then evaluate the rules on each host and assign labels both at the host and the site level. In this way, the site provider will see details regarding discovered devices for each host, while the application owner can use site device labels to control the application scheduling.

The supctl output below shows an example device rule that creates a device label tty, matching a tty serial device with the serial8250 driver:

$ supctl show system sites my-site1 device-labels
- label: tty  
  udev-patterns:    
  - SUBSYSTEM=="tty", DRIVERS=="serial8250", ATTR{console}=="N"
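The udev-patterns entries use the same match syntax as the udevadm output shown earlier, so you can build rules directly from attributes you find there. As another, purely illustrative example, a rule labeling video devices might look like this (the camera label and match expression are hypothetical, not part of this release):

- label: camera
  udev-patterns:
  - SUBSYSTEM=="video4linux", ATTR{name}=="*Webcam*"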

As a site provider, you will now be able to see the result of this rule configuration on a specific site:

$ supctl show --site my-site1 system cluster hosts
- cluster-hostname: my-site1-003
  ...
  device-labels:
    tty:
      - /dev/ttyS1
      - /dev/ttyS2
      - /dev/ttyS3
  ...

The above output shows that the Edge Enforcer discovered three devices for the tty label on the host my-site1-003.

If you look at your assigned sites as an application owner, you will see the following:

$ supctl show --site my-site1 assigned-sites my-site1
name: my-site1
type: edge
labels:
  system/type: edge
  system/name: my-site1
host-labels: {}
volume-labels: {}
device-labels:
  tty:
    - /dev/ttyS1
    - /dev/ttyS2
    - /dev/ttyS3

This illustrates the vital concept of labels as a contract between the site provider and the application owners. As an application owner, you can see the features a site provides through labels, without needing details about the exact hosts.

The site provider can also control access to devices by using resource profiles. See more in the Avassa documentation for device discovery.

Mounting devices in your application

To mount a device in your container, you simply specify the required device label as part of your application specification. The alpine example below requires a device matching the tty label in order to start.

name: alpine
version: "1.0"
services:
  - name: my-service
    containers:
      - name: alpine
        ...
        devices:
          device-labels:
            - tty
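You would then deploy the application through an application deployment that selects the target sites. A minimal sketch, assuming a deployment named alpine-deployment and a match-site-labels placement expression (both the name and the label expression here are illustrative, not from this release note):

name: alpine-deployment
application: alpine
application-version: "1.0"
placement:
  match-site-labels: >
    system/type = edge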

Assume my-site1 has a host with a tty serial device, but my-site2 does not. If we now deploy this application to my-site1 and my-site2, we will see the following:

$ supctl show --site my-site1 applications alpine service-instances
- name: my-service-1
  application-version: "1.0"
  oper-status: running
  ready: true
  host: my-site1-002
  application-network:
    ips:
      - 172.26.0.1/16
  gateway-network:
    ips:
      - 172.25.255.2/24
  ingress:
    ips: []
  containers:
    - name: alpine
      id: a4c15b840f4f
      oper-status: running
      ready: true
      ...
      devices:
        - /dev/ttyS1
        - /dev/ttyS2
        - /dev/ttyS3

We can see that the service instance my-service-1 has mounted the tty devices. However, if we look at my-site2, we can see that the application failed to start since the required devices were not available on any of the hosts on that site.

$ supctl show --site my-site2 applications alpine service-instances
- name: my-service-1
  oper-status: not-scheduled
  not-scheduled-reason: no-device-label-match
  error-message: all required devices not present on node

You can read more about device discovery in our documentation.

Site and host observability

In our release highlights earlier this summer, we introduced the new application observability features. Now it is time for site and host observability.

And… have you ever experienced the blame game between application and infrastructure teams? You read that correctly: with the Avassa platform, you now have visibility from the edge hosts all the way up to the applications. Infrastructure and application teams can meet and resolve issues faster, and dependencies between infrastructure and applications are easily spotted.

A short refresher on application observability follows. If you are an application owner, you will immediately be notified if your application has an issue. In the screenshot below, the application is not healthy since the only replica running on the site has a failing readiness probe.

But assume there was an underlying issue with the hosts on the site. As you can see in the screenshot above, the service is running on host at-home-001.

With the newly introduced site observability, a site provider will be notified whenever an issue exists on a site or host. It also provides drill-down functions for troubleshooting.

To the left, you see a list of all your sites, including the overall health state for each site. To help prioritize the work, you will also see whether any tenants are assigned to the site and whether any applications are running. If you select a site, you will get a list of all its hosts and their health.

The screenshot below illustrates the case where there is a disk issue on one of the hosts.

We have abstracted the health state of a host to cover common issues out of the box. Rather than providing a bag of metrics and leaving it to the ops team to set thresholds, we provide our best practices pre-baked. This simplifies operations and shortens the time it takes to bring your edge solutions into production.

An important note: all of this is available over the Avassa APIs, so you can integrate with your overall monitoring solution. And our APIs have the edge site context embedded, so you do not have to struggle with the classical service-impact lookups and alarm-enrichment procedures.
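As an illustration, the host health shown with supctl above could be pulled into an external monitoring system over HTTPS. The sketch below is hypothetical: the endpoint host and the exact REST path are assumptions (we assume the API mirrors the supctl resource paths), and $TOKEN is a previously obtained access token:

# Hypothetical sketch: fetch host state for integration with external monitoring
$ curl -s "https://api.acme.example/v1/state/system/cluster/hosts" \
    -H "Authorization: Bearer $TOKEN"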

Persistent volumes

Terminology note: the structure of an Avassa Application is

  • Application
    • Services: the scheduler granularity and where you mount volumes
      • Containers: if you want containers to see the same volume, they need to be part of the same service

An application may store data locally on disk in three ways:

  1. by writing it into the container's writable layer: temporary storage, deleted upon container restart
  2. by writing it into an ephemeral volume: data survives container restarts but is deleted by the Avassa platform if the application is removed
  3. šŸ†• by writing it into a persistent volume: data stays even if the application is removed

Ephemeral and persistent volumes are specified as part of your application specification. We have supported ephemeral volumes since the beginning. Lately, we have worked with customers that build database applications that run at the edge. For these use cases, it is important to be able to remove an application and add another one that picks up the data from the previous one. Since the lifecycle of persistent volumes does not follow the lifecycle of the services, they are managed separately; you need to delete them explicitly.

Example:

name: with-persistent-volume
services:
  - name: persistent-storage
    containers:
      - name: persistent-storage
        mounts:
          - volume-name: storage
            mount-path: /storage
    volumes:
      - name: storage
        persistent-volume:
          size: 10GB
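For comparison, an ephemeral volume is declared in the same place in the specification. A sketch, assuming the analogous ephemeral-volume key (the key name is inferred from the volume listing below, and the volume name cache-volume matches that listing):

    volumes:
      - name: cache-volume
        ephemeral-volume:
          size: 10GB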

You can list volumes:

$ supctl show volumes

- id: with-ephemeral-volume.cache-on-disk-1:cache-volume
  sites:
    - name: stockholm-sergel
      type: ephemeral
      size: 10 GB
      used-by:
        - application: with-ephemeral-volume
          service-instance: cache-on-disk-1
- id: with-persistent-volume.persistent-storage-1:storage
  sites:
    - name: gothenburg-bergakungen
      type: persistent
      size: 10 GB
      used-by:
        - application: with-persistent-volume
          service-instance: persistent-storage-1
    - name: helsingborg-roda-kvarn
      type: persistent
      size: 10 GB

And to delete a persistent volume on a specific site:

$ supctl do volumes with-persistent-volume.persistent-storage-1:storage \
    sites helsingborg-roda-kvarn delete

or on all sites:

$ supctl do volumes with-persistent-volume.persistent-storage-1:storage \
    delete-from-all-sites

Read more about persistent volumes in our documentation.


And finally, one of the best recordings of Autumn Leaves, by Eva Cassidy, is on YouTube.

Stay tuned for more inventions from the Avassa engineering team during the autumn.
