February 2023: Feature releases & highlights

This feature releases & highlights report includes:

  1. Run commands in a container from the Web UI: You can now navigate to a specific container and execute commands within that container.
  2. Enhanced site search and filtering: When you have thousands of sites, you need efficient search and filtering capabilities in the User Interface. We have added a powerful filter/search tool in all site lists in the Web UI.
  3. NVIDIA GPU passthrough: You can now give a container application access to GPUs on the site.
  4. Local unseal: For security reasons, a restarted host on a site needs to be unsealed by keys from Control Tower or other site hosts. This can now be turned off so that a host unseals itself to prioritize availability over security.

Run commands in a container from the Web UI

The Avassa command line provides an efficient way of running commands directly in a container (“exec interactive”). Below, we run the ps command within the curtain-controller container:

% supctl do --site stockholm-sture applications theater-room-manager service-instances curtain-controller-1 containers curtain-controller exec-interactive sh
/ # ps
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/docker-init -- /bin/sh -c $EXECUTABLE
    7 root      0:00 curtain-controller
   10 root      0:00 /bin/sh
   16 root      0:00 top
   24 root      0:00 sh
   31 root      0:00 ps

The same functionality has now been added to the User Interface. You can navigate to a running application on a site and select a specific container:

In the above screenshot, we picked the curtain-controller container. From here, there are two options for running commands:

  1. “Open terminal”: opens an embedded terminal in the list of containers. This is useful for smaller tasks, and you can have several container terminals open simultaneously to quickly compare their output.
  2. “Open terminal tab”: opens a terminal in a separate browser tab, which is useful for longer sessions.

Below we show an example of option 1: two parallel command windows for the curtain-controller and projector-operations containers.

The screenshot below illustrates option 2, a terminal opened in a separate browser tab:

Enhanced site search and filtering

Edge computing is characterized by a large number of sites, and that has implications for both the core orchestrator and the User Interface. To address usability, we have added a filter/search component everywhere sites are listed: in the navigation bar to the left, in the list of sites after you have selected an application, when assigning sites to a tenant, and more. You can filter both on site labels and on several site states. We show some examples below:

Show me all sites with the label “customer” that have applications deployed:

Or, after selecting an application, you can search among the sites where the application is running by clicking the “Show filters” link.

After expanding the “Show filters” link, you get the below search fields:
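
To make the filter semantics concrete, here is a small sketch of filtering a site list on a label and on whether applications are deployed. The site structure used here is our own simplification for illustration; it is not the actual Avassa site schema:

```python
# Sketch: filter sites on a label and on deployment state.
# NOTE: this site structure is a simplified illustration, not the real schema.

sites = [
    {"name": "stockholm-sture", "labels": {"customer": "acme"}, "apps_deployed": True},
    {"name": "stockholm-sergel", "labels": {"customer": "initech"}, "apps_deployed": False},
    {"name": "lab-site", "labels": {}, "apps_deployed": True},
]

def filter_sites(sites, label=None, apps_deployed=None):
    """Return the sites that carry the given label and match the deployment state."""
    result = []
    for site in sites:
        if label is not None and label not in site["labels"]:
            continue
        if apps_deployed is not None and site["apps_deployed"] != apps_deployed:
            continue
        result.append(site)
    return result

# All sites with the label "customer" that have applications deployed:
print([s["name"] for s in filter_sites(sites, label="customer", apps_deployed=True)])
# → ['stockholm-sture']
```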

NVIDIA GPU passthrough

A common edge use case is deploying applications where data, for example video streams, must be processed locally. Such workloads benefit greatly from GPUs. We have now added a convenient way to mount GPUs into your application: “GPU passthrough.”

Managing the edge infrastructure around GPUs is essential:

  • A site provider needs to be able to configure rules for which GPUs are exposed to application developers.
  • When applications are scheduled on a site, the platform must automatically find the hosts that have GPUs.

All of these requirements are well covered in our release for GPU support.
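
The scheduling requirement can be sketched as a simple label match between what an application requests and what each host offers. Again, the host structure below is a simplified illustration of the idea, not the actual scheduler:

```python
# Sketch: find hosts whose GPU labels satisfy an application's request.
# NOTE: simplified illustration; the real scheduler considers much more.

hosts = [
    {"hostname": "stockholm-sergel-001", "gpu_labels": ["all", "any-tesla"]},
    {"hostname": "stockholm-sture-001", "gpu_labels": []},
]

def schedulable_hosts(hosts, requested_labels):
    """Hostnames of hosts that carry every GPU label the application asks for."""
    return [
        h["hostname"]
        for h in hosts
        if set(requested_labels) <= set(h["gpu_labels"])
    ]

print(schedulable_hosts(hosts, ["any-tesla"]))
# → ['stockholm-sergel-001']
```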

As a site provider, you can now list hosts and discovered GPUs:

$ supctl show -s stockholm-sergel system cluster hosts --fields hostname,gpus
- hostname: stockholm-sergel-001
  gpus:
    - uuid: GPU-b75c47d9-5fb4-63e0-a07b-ff2633af741c
      name: Tesla M60
      serial: "0321017046575"
      memory: 7680 MiB
      driver-version: 525.60.13
      compute-mode: Default
      compute-capability: "5.2"
      display-mode: Enabled
      labels: []
    - uuid: GPU-ee1b2a5c-3cd0-0c4a-a240-d87c22748a35
      name: Tesla M60
      serial: "0321017046575"
      memory: 7680 MiB
      driver-version: 525.60.13
      compute-mode: Default
      compute-capability: "5.2"
      display-mode: Enabled
      labels: []

The above output shows that the site has a single host with two GPUs.

We want to remove the need for application teams to know exactly which hosts have GPUs. Therefore, a site provider configures GPU labels, and application developers refer to these labels in the application specification. This works in the same way as Avassa device labels.

You can create a system-wide GPU label setting or configure labels per site. In the first case, the labels apply to all sites.

System-wide example:

$ supctl create system settings <<EOF
gpu-labels:
  label: all
  all: true
EOF

Per-site example: on the site stockholm-sergel we search for Tesla GPUs and create an any-tesla label if any are found:

$ supctl merge system sites stockholm-sergel <<EOF
gpu-labels:
  - label: any-tesla
    max-number-gpus: 1
    nvidia-patterns:
      - name == "*Tesla*"
EOF

The latter example uses the NVIDIA pattern expression syntax. You can match on any attribute that appears in the GPU list above.
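
The pattern behaves like glob-style matching on a GPU attribute. Here is a rough Python equivalent of matching `name == "*Tesla*"` against the discovered GPUs, purely to illustrate the idea (this is not Avassa's implementation):

```python
import fnmatch

# GPU attributes as discovered on the host (from the listing above).
gpus = [
    {"uuid": "GPU-b75c47d9-5fb4-63e0-a07b-ff2633af741c", "name": "Tesla M60"},
    {"uuid": "GPU-ee1b2a5c-3cd0-0c4a-a240-d87c22748a35", "name": "Tesla M60"},
]

def match_gpus(gpus, attribute, pattern):
    """GPUs whose attribute value matches a glob-style pattern."""
    return [g for g in gpus if fnmatch.fnmatch(g[attribute], pattern)]

print(len(match_gpus(gpus, "name", "*Tesla*")))
# → 2
```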

If we now show the status of the hosts at the stockholm-sergel site:

$ supctl show -s stockholm-sergel system cluster hosts --fields hostname,gpu-labels
- hostname: stockholm-sergel-001
  gpu-labels:
    - name: all
      matching-gpus:
        - uuid: GPU-b75c47d9-5fb4-63e0-a07b-ff2633af741c
        - uuid: GPU-ee1b2a5c-3cd0-0c4a-a240-d87c22748a35
    - name: any-tesla
      max-number-gpus: 1
      matching-gpus:
        - uuid: GPU-b75c47d9-5fb4-63e0-a07b-ff2633af741c
        - uuid: GPU-ee1b2a5c-3cd0-0c4a-a240-d87c22748a35

You can see that both the site-specific and the system-wide GPU labels have been created.

Now let us move over to the application definition. A sample application that requires GPUs might look like this:

name: sample-gpu-app
version: 0.0.1
services:
  - name: sample-gpu-service
    mode: replicated
    replicas: 1
    containers:
      - name: sample-gpu-container
        image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
        entrypoint:
          - /bin/bash
        cmd:
          - "-c"
          - sleep infinity
        gpu:
          labels:
            - all

If we deploy this application to a site, we can see that the container gets access to the GPUs:

$ supctl show -s stockholm-sergel \
    applications sample-gpu-app service-instances sample-gpu-service-1 \
    --fields oper-status,containers/[name,gpus,nvidia-driver-capabilities]
oper-status: running
containers:
  - name: sample-gpu-container
    gpus:
      - uuid: GPU-b75c47d9-5fb4-63e0-a07b-ff2633af741c
      - uuid: GPU-ee1b2a5c-3cd0-0c4a-a240-d87c22748a35
    nvidia-driver-capabilities: compute, utility

Note: The Avassa GPU features assume you have installed the NVIDIA container toolkit on the hosts.

Unsealing an isolated site

The state of an Avassa edge site is encrypted, so when a site is (re)started it is “sealed”, i.e., locked from being used. To unseal the site, it needs to get keys from the Control Tower. However, if a site does not have internet connectivity, it has to be unsealed manually. You can read how to perform that in our documentation.

This procedure prioritizes security: if a host is, for example, stolen and restarted, its system state remains encrypted.

We have now added an option that lets a site unseal itself automatically. You can use this option if physical access to the hosts can be satisfactorily prevented and/or the site does not contain sensitive data:

$ supctl merge system sites stockholm-sture <<EOF
> allow-local-unseal: true
> EOF
$ supctl show system sites stockholm-sture
name: stockholm-sture
descriptive-name: Sture
type: edge
...
allow-local-unseal: true
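
Conceptually, the decision a restarting host makes can be sketched as follows. This is purely illustrative pseudologic in Python; the actual unseal protocol is more involved:

```python
def unseal_key(site_config, control_tower_reachable, peer_hosts_reachable,
               local_key, remote_key):
    """Illustrative decision logic for where a restarted host gets its unseal key."""
    if control_tower_reachable or peer_hosts_reachable:
        return remote_key  # normal case: keys come from Control Tower or site peers
    if site_config.get("allow-local-unseal"):
        return local_key   # availability over security: the host unseals itself
    return None            # host stays sealed until it is manually unsealed

# With allow-local-unseal set, an isolated host can still come up:
site = {"name": "stockholm-sture", "allow-local-unseal": True}
print(unseal_key(site, False, False, "local-key", "remote-key"))
# → local-key
```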

Bonus: Where did GPUs come from?

The history of GPUs dates back to the 1970s, when the concept of a specialized processor for graphics was first introduced. Compared to CPUs, GPUs are optimized for floating-point operations rather than flow control. The first modern GPUs were developed in the 1990s by companies such as NVIDIA and ATI; these early GPUs were used primarily for 2D graphics and video acceleration. The term GPU was popularized by Sony with the graphics processor in the original PlayStation console. Over time, GPUs became more powerful and versatile and were increasingly used for tasks such as scientific computing and machine learning. Today, GPUs are integral to many high-performance edge computing applications, such as machine learning and video processing.

It is an exciting path of technology usage, morphing from kids’ video games to business-critical applications.

Try it yourself

Book a demo

Deploy container applications across a distributed edge cloud in minutes. Book a demo today to take a closer look at the Avassa platform!
