Avassa Edge Site Scalability: Test Report

Edge is different from the cloud in many ways:

  • Limited footprint
  • Offline requirements
  • Number of sites (the focus of this document)
  • and more…

Knowing that the platform you pick is designed and validated for those requirements is highly relevant. If you pick a solution built for the cloud and try to put it to work at the edge, you might stumble. A key requirement mentioned above is that the edge is characterized by managing a large set of sites. This article describes the scale tests we run at Avassa for managing up to 10,000 sites.

We will walk through our scale tests in the following way:

  • Test setup: how we ran the tests.
  • The test application: description of the sample application we deploy at the edges.
  • Test results, resource usage: this shows test results concerning resource usage in the central orchestrator, the Control Tower: memory, CPU, disk I/O, and networking.
  • Test results, performance: this shows the time it takes to complete test operations like deploying an application to all sites.
  • Design principles: background information on how we designed the Avassa system to handle scale.
  • Appendix with all measurements.

Key takeaways

We regularly perform scale tests for 100, 1,000, 5,000, and 10,000 sites. For each of these test setups, we run relevant test functions such as creating the sites, sites calling home, and deploying applications. The tests use a simulated edge network that implements the protocol of the Avassa Edge Enforcer agent.

The most important outcomes of the test are:

  • We can manage 10,000 sites with a 3-node cluster in which each node has 4 vCPUs and 8 GiB of RAM.
  • When an edge site boots, it calls home to the Avassa Control Tower and establishes a trusted connection. This involves several steps, including certificate management and security functions. Completing the process for 10,000 sites takes about 55 minutes, roughly 0.3 seconds per site.
  • Deploying an application across 10,000 sites also takes about 55 minutes, roughly 0.3 seconds per site.
  • Client API calls summarizing the state of all 10,000 applications and sites respond within 10 seconds.

All performance tests show linear behavior.

These measurements show that Avassa can manage large edge deployments with good response times and modest host requirements for the central orchestrator.

Test setup

We run a hosted Control Tower in the same way as for real customer use cases. The Control Tower runs as a cluster consisting of three AWS c6i.xlarge instances:

  • 4 vCPU
  • 8 GiB RAM

The purpose of this test is to stress the Control Tower with the load that a large number of sites would incur. To do so, we use an edge site simulator that implements the Edge Enforcer protocol. (Again, the purpose of these tests is not local processing at the edge site; we cover that with separate edge site footprint tests.)

To see how performance and resource usage are affected by a growing number of edge sites, we run the same tests with the following numbers of sites:

  • 100 sites
  • 1,000 sites
  • 5,000 sites
  • 10,000 sites

A site in Avassa is a cluster of one or several hosts. In these tests, we have one host at each site.

Then, for each edge setup above, we test and measure the following scenarios:

  • Create the sites in the Control Tower: This corresponds to a batch load of edge sites. This can, for example, be done in a pre-provisioning scenario. Think of the case where you have an inventory file of all sites and hosts before the hosts are shipped or installed.
  • Edge Enforcer calling home to Control Tower: When hosts running Edge Enforcer boot at the sites, Edge Enforcer calls home to the Control Tower. In reality, this is probably distributed over days, if not weeks. But to stress test/load test the Avassa system, we let them all call home almost simultaneously. Control Tower has a built-in throttling mechanism to handle these kinds of scenarios.
  • Deploy an application across all edge sites. This is one of the most important tests. You have a new or upgraded application that needs to be rolled out to a large set of edge sites. This test measures the initial deployment use case; no images or secrets are available at the sites at the start. An upgrade scenario will be simpler; only the updated layers within each updated container image will be distributed to the sites.
  • Perform a distributed pub/sub bus query (Volga) towards all sites and collect the result. Avassa has a built-in pub/sub bus for collecting telemetry and logs. One instance runs at each site and one in the central Control Tower. You can perform distributed queries from the Control Tower asking for specific data from the sites.
  • Perform client requests to the Avassa Control Tower. This measurement gives an indication of the API responsiveness for large systems. It also indicates response times for the command line tool and the UI, both of which use the API.
  • Delete the application on all edge sites.
  • Delete all sites from the Control Tower: returning to an “empty” system.
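
To make the sequence concrete, below is a minimal sketch of what a driver for these test scenarios could look like. The scale_test_harness module and its functions are hypothetical stand-ins for our internal tooling:

```python
import time

# Hypothetical helpers; the real tests drive the Avassa APIs and the
# edge site simulator described above.
from scale_test_harness import (
    create_sites, call_home, deploy_app, volga_query,
    client_requests, delete_app, delete_sites,
)

SITE_COUNTS = [100, 1_000, 5_000, 10_000]

SCENARIOS = [
    ("create sites", create_sites),
    ("call home", call_home),
    ("deploy application", deploy_app),
    ("volga query", volga_query),
    ("client requests", client_requests),
    ("delete application", delete_app),
    ("delete sites", delete_sites),
]

for n_sites in SITE_COUNTS:
    for name, scenario in SCENARIOS:
        start = time.monotonic()
        scenario(n_sites)  # run the scenario to completion
        elapsed = time.monotonic() - start
        print(f"{n_sites:>6} sites | {name:<18} | "
              f"{elapsed:8.1f} s | {elapsed / n_sites:.3f} s/site")
```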

Each of the steps above involves a number of actions in the Avassa system. It is essential to have a broad understanding of what happens in order to interpret the numbers. Below we indicate actions taken in the Control Tower that are relevant to the measurements.

Create the sites in the Control Tower

  • A distribution certificate for each host is generated. This certificate is used to authenticate host-to-host communication within a site.
  • A site-specific API certificate is generated; this certificate is used in the site-local API.
  • Site-unique encryption keys are generated.
  • Control Tower assigns controller nodes.
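
To give a feel for why this step is compute-intensive, the sketch below generates one host key pair and certificate with the Python cryptography library. It is a self-signed illustration of the kind of cryptographic work involved, not Avassa's actual certificate chain:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def make_host_certificate(site: str, host: str) -> tuple[ec.EllipticCurvePrivateKey, x509.Certificate]:
    """Generate a key pair and a certificate for one host."""
    key = ec.generate_private_key(ec.SECP256R1())  # key generation is CPU-heavy
    name = x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, f"{host}.{site}.example"),
    ])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed for this sketch; the real CA chain differs
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    return key, cert

# Creating 10,000 sites means doing this (and more) thousands of times.
key, cert = make_host_certificate(site="site-0001", host="host-a")
```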

Edge Enforcer calling home to Control Tower

  • A pub/sub bus connection is established to the site.
  • Site configuration is communicated to the site hosts.
  • Certificates and secret keys are distributed to the site.
  • The site seal token is stored in the Control Tower.
  • The site will automatically upgrade to the latest Edge Enforcer image if required (if the site is already running the most recent version, the download step is skipped – we chose to simulate the worst-case scenario that all sites are out of date).
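
The throttling mechanism mentioned in the test description above is internal to the product, but the idea can be sketched with a semaphore that caps how many call-home handshakes are processed concurrently. This is a conceptual illustration, not Avassa's code, and the cap of 64 is an assumed value:

```python
import asyncio

MAX_CONCURRENT_CALL_HOMES = 64  # assumed cap, for illustration only
throttle = asyncio.Semaphore(MAX_CONCURRENT_CALL_HOMES)

async def call_home_steps(site_id: str) -> None:
    """Stand-in for the real steps: pub/sub connect, config push,
    certificate/key distribution, seal token storage, optional upgrade."""
    await asyncio.sleep(0.3)  # ~0.3 s/site was measured end to end

async def handle_call_home(site_id: str) -> None:
    async with throttle:  # excess sites queue here instead of overloading the tower
        await call_home_steps(site_id)

async def main() -> None:
    # 10,000 sites calling home almost simultaneously, as in the stress test
    await asyncio.gather(*(handle_call_home(f"site-{i:05}") for i in range(10_000)))

asyncio.run(main())
```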

Deploy an application across all edge sites

  • Control Tower has a built-in container registry, and all sites pull the application images from this registry. This puts performance requirements on the Control Tower for this use case; see the back-of-the-envelope calculation after this list. The rationale for this design is that we cannot assume that edge sites can reach an arbitrary registry on the internet.
  • Images are locally replicated if the site has several hosts.
  • Distribute application secrets to the corresponding sites.
  • The site starts the application using the container runtime.
  • The site reports a result back to the Control Tower.
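
As a back-of-the-envelope calculation, based on the image sizes of the test application described below and assuming every site pulls all three images in full (as in this initial-deployment test):

```python
# Compressed image sizes from the test application (MB)
images_mb = {"projector-operations": 69, "digital-assets-manager": 69, "curtain-controller": 64}

sites = 10_000
total_mb = sum(images_mb.values()) * sites  # ~202 MB per site
duration_s = 55 * 60                        # ~55 minutes for the full rollout

print(f"data served: ~{total_mb / 1_000_000:.1f} TB")        # ~2.0 TB
print(f"aggregate throughput: ~{total_mb / duration_s:.0f} "
      f"MB/s across the three-node cluster")                 # ~600 MB/s
```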

Volga query across all sites

  • Form a site query and distribute that to all sites. In the test, we have formed a query where each site responds with 100 entries.
  • Collect and merge results.
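
A distributed query could be issued from a client along the following lines. The endpoint and payload shape are hypothetical illustrations; consult the Avassa API documentation for the actual Volga query interface:

```python
import requests

# Hypothetical endpoint and payload shape, for illustration only.
query = {
    "topic": "system:container-logs",
    "site-selector": "all",   # fan the query out to every site
    "limit-per-site": 100,    # each site responds with 100 entries, as in the test
    "match": "error",
}

resp = requests.post(
    "https://api.controltower.example/v1/volga/query",
    json=query,
    timeout=30,
)
resp.raise_for_status()

# The Control Tower merges per-site results into one response
# (field names below are assumptions).
for entry in resp.json()["entries"]:
    print(entry["site"], entry["time"], entry["payload"])
```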

Perform Control Tower API requests

  • Get site summary status.
  • Get application deployment status.
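
To reproduce this kind of measurement against your own environment, a small timing loop is enough. The base URL, token placeholder, and endpoint paths below are illustrative assumptions, not the exact Avassa resource names:

```python
import time

import requests

BASE = "https://api.controltower.example/v1"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token

# Endpoint paths are illustrative; see the Avassa REST API reference.
for path in ("/sites/summary", "/applications/deployment-status"):
    start = time.perf_counter()
    resp = requests.get(BASE + path, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print(f"{path}: {time.perf_counter() - start:.2f} s, "
          f"{len(resp.content)} bytes")
```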

Delete the application from all edge sites

  • An application delete instruction is published on the pub/sub bus to the sites.
  • The application will be stopped at the site.
  • The result of the stop will be reported back to the Control Tower.

Delete all sites

  • A “site delete” message is published on the pub/sub bus to the sites.
  • The site itself cleans up and removes all data.
  • The Control Tower removes all state kept for the deleted site.

For each test setup, we measure the following metrics on the three Control Tower instances:

  • Time for the task to complete
  • Memory, CPU, Disk I/O, Network receive/transmit

The test application

We use the movie theaters demo example application as the test application. It contains three multi-architecture containers; each site only pulls the images for its own architecture. In this test setup, the sites are single-architecture (x86).

The compressed sizes of the single-architecture layers are roughly (as reported by docker manifest inspect -v <container>):

  • projector-operations: 69 MB
  • digital-assets-manager: 69 MB
  • curtain-controller: 64 MB
  • Secrets: 2 secrets are distributed to each site where the application is deployed

Note that the compressed size is relevant since that is what is downloaded from the Control Tower to the sites, not the uncompressed size running in the container runtime.
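
If you want to check the compressed download size of your own images, you can sum the layer sizes from the manifest. A small sketch follows; the exact JSON shape of docker manifest inspect -v can vary between schema versions, and the image name is a placeholder:

```python
import json
import subprocess

def compressed_size_mb(image: str, arch: str = "amd64") -> float:
    """Sum compressed layer sizes for one architecture of a multi-arch image."""
    out = subprocess.run(
        ["docker", "manifest", "inspect", "-v", image],
        capture_output=True, check=True, text=True,
    ).stdout
    manifests = json.loads(out)
    if isinstance(manifests, dict):  # single-arch images return one object
        manifests = [manifests]
    for m in manifests:
        platform = m.get("Descriptor", {}).get("platform", {})
        if platform.get("architecture") == arch:
            layers = m["SchemaV2Manifest"]["layers"]
            return sum(layer["size"] for layer in layers) / 1_000_000
    raise ValueError(f"no {arch} manifest found for {image}")

print(compressed_size_mb("registry.example/projector-operations:latest"))
```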

Test results: Resource usage

The graphs below show resource usage for 10,000 sites. The appendix shows the same graphs for 100, 1,000, and 5,000 sites.

The test cases mentioned above (create sites, call home, etc.) are performed in sequence, and each blue bar in the diagram marks the start of a case. The diagrams show the resources across the three hosts in the cluster.

Memory

We can see that we are below 4 GiB on all hosts.

CPU

The load is well spread amongst the hosts. The full-batch site deletes are the most demanding operations.

Disk IO

Network

Image download is the most network-demanding operation, both when the Edge Enforcer is updated at call-home and during application deployment.

Comments on the resource measurements

We can see that the Control Tower uses around 2 GiB of RAM or less across the three hosts for up to 10,000 sites.

CPU load is evenly spread across the three hosts. The most CPU-intensive operations are:

  • create and delete sites
  • deploy application

From a CPU usage perspective (4 vCPUs per host), we are still well-equipped to run all test cases at 10,000 sites.

The Control Tower serves images to all sites, which can be seen by the network transmit load for the deploy application test case.

Test results: Performance

All individual measurements (100, 1,000, 5,000, and 10,000 sites) are indicated with their “time to complete” values in the diagram.

Create sites

Creating sites in the Control Tower shows a linear time dependency. As described above, the cryptographic functions are the most compute-intensive part. As a general rule of thumb, it is a matter of a few minutes per thousand sites.

On average, it takes 0.03-0.05 seconds to create a site.

Edge Enforcers call home to Control Tower

As described at the beginning of this article, the call-home process involves many steps, including upgrading the Edge Enforcer, distributing keys to the site, and more. We see roughly 5 minutes per thousand sites.

On average, it takes 0.2-0.3 seconds per site to perform the call-home operation.

Deploy an application across all edge sites

The most important measurement for an Avassa system is “How fast can you install an application at your edge sites?”

This involves distributing the container images, and here we had three images of about 65 MB in size each. For upgrade scenarios, the Control Tower only sends modified application image layers, which is faster in all cases.
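
Because container image layers are content-addressed, the upgrade case reduces to a set difference over layer digests: only layers the site does not already have are downloaded. A toy illustration, with made-up digest values:

```python
# Digests a site already has vs. the digests of the upgraded image.
# All values are made up for illustration.
site_has = {"sha256:1111", "sha256:2222", "sha256:3333"}
upgraded_image = {"sha256:1111", "sha256:2222", "sha256:4444"}

to_download = upgraded_image - site_has
print(to_download)  # {'sha256:4444'}: only the changed layer is transferred
```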

On average, deploying an application takes 0.2-0.3 seconds per site.

Perform a distributed Volga query (pub/sub bus) towards all sites and collect the result

The Avassa built-in pub/sub bus runs an instance at each site and in the central Control Tower. Producers publish data, and consumers subscribe to data from specific topics. Examples of topics are host metrics, audit logs, and container logs. The latter implies that Avassa collects all container log entries at each site. From the central Control Tower, you can perform a query searching for specific entries across a selected set of sites.

This test searches logs across all sites in the test and merges the result.

Response times range from sub-seconds for 100 sites to 10 seconds for 10,000 sites.

Client requests

Performing client requests that summarize all sites and applications ranges from sub-second response times up to about three seconds for 10,000 sites. This shows that the command line tool, the Avassa UI, and the REST API remain responsive for large deployments.

The API calls are the two described in the test setup: get site summary status and get application deployment status.

Avassa scaling design principles

With ambitious targets for the number of sites we must support, we have made a couple of conscious design decisions. A primary principle is to push most of the work down to each site, to the Edge Enforcer, and minimize the processing in the Control Tower. Not only does this enable scaling and distribution of the processing load, it also makes the sites autonomous in case of network outages.

Transport

Edge Enforcers at each site need a secure connection to the Control Tower. We realized that TCP-based TLS could become a bottleneck for large deployments. To simplify the programming, we did not want to multiplex all control channels over a single TLS tunnel, so with TCP-based TLS we would potentially end up with multiple connections per host. This comes with a cost, both in TLS negotiations and in maintaining a lot of TLS state in the Control Tower.

Hence, we decided to use the QUIC protocol. QUIC is TLS over UDP on steroids: it comes with built-in support for multiplexing streams over a single QUIC connection. It is a double win for us: a single connection and less complexity.
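
The effect is easy to demonstrate with the aioquic Python library: several logical channels can share one connection and one TLS handshake. The host, port, ALPN value, and channel names below are illustrative; this is not Avassa's actual wire protocol:

```python
import asyncio

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main() -> None:
    # Host, port, and ALPN value are illustrative only.
    config = QuicConfiguration(is_client=True, alpn_protocols=["demo"])
    async with connect("controltower.example", 4433, configuration=config) as conn:
        # Open several logical channels over ONE connection and ONE TLS
        # handshake; with TCP-based TLS each would cost its own setup.
        for channel in (b"pubsub", b"config", b"telemetry"):
            reader, writer = await conn.create_stream()
            writer.write(channel + b": hello\n")

asyncio.run(main())
```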

Efficient publish and subscribe

Over the QUIC connection, we run our pub/sub protocol, Volga. When the Control Tower has work requests for all or a subset of sites, it publishes the messages locally in the Control Tower, tagged with the target sites. Each site maintains a subscriber, over QUIC, in the Control Tower for these messages. The Control Tower producer can then be very lightweight, and it is up to each site to keep track of received messages. If a site has been disconnected for some time, it asks for a replay of missed messages.
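
Conceptually, the site-side subscriber only needs to remember the last sequence number it has processed in order to request a replay after a disconnect. A minimal sketch of the idea, not Avassa's implementation, assuming per-topic monotonically increasing sequence numbers:

```python
from dataclasses import dataclass

@dataclass
class Message:
    seqno: int
    payload: bytes

class SiteSubscriber:
    """Conceptual site-side subscriber that can ask for a replay
    after a disconnect."""

    def __init__(self) -> None:
        self.last_seen = 0

    def on_message(self, msg: Message) -> None:
        if msg.seqno <= self.last_seen:
            return  # duplicate from a replay; ignore
        self.last_seen = msg.seqno
        self.handle(msg)

    def on_reconnect(self, broker) -> None:
        # The site, not the Control Tower, tracks what it has received.
        broker.replay(from_seqno=self.last_seen + 1, subscriber=self)

    def handle(self, msg: Message) -> None:
        print(f"processing message {msg.seqno}")
```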

Volga also comes with a distributed query function. When a query is executed in the Control Tower, all the Control Tower has to do is forward the query to the selected sites, and the query is executed locally on each site. This keeps the work the Control Tower has to perform to a minimum.

Push work to the edge

When deploying an application to a subset of sites, all the Control Tower has to do is publish a message on Volga indicating which sites should start (or upgrade, or delete…) this particular application: a very lightweight task for the Control Tower. The heavy lifting, including the fine-grained scheduling, is done on each site.

The general idea is simple: do as little as possible in the central location, the Control Tower, and push as much work as possible towards the edge.

Conclusion

At Avassa, we focus on simplifying the lifecycle management of applications at the edge. The number of edge sites can easily reach tens of thousands for serious deployments. In this article, we have described the tests we run to validate scaling. Scale has to be tested and measured: you can never know from a system with hundreds of nodes how it will behave with 10,000 nodes. All our tests show that 10,000 nodes can easily be managed with a small footprint for the Avassa Control Tower (the central orchestrator) and fast application deployment times across all sites.

Appendix

Below are test results for all test batches: 100; 1,000; 5,000; and 10,000 sites/clusters.

100 sites

Memory

CPU

Disk IO

Network

1,000 sites

Memory

CPU

Disk IO

Network

5,000 sites

Memory

CPU

Disk IO

Network

10,000 sites

Memory

CPU

Disk IO

Network
