
Why and how to extend CI/CD to the edge

Connecting the CI/CD pipeline to the edge delivers significant business value:

  • Increase operational efficiency by giving developers the same consistent, efficient experience for edge applications as for cloud applications.
  • Deliver better products faster by automatically deploying and testing applications at the edge, with early feedback.
  • Require little or no additional training for operations staff, since their standard cloud tools are used for edge applications as well.


How can this be achieved, and what makes the edge use case different?

Let us skip the traditional CI/CD introduction with the infinity symbol and jump directly to the meat. What is the difference between an existing CI/CD pipeline for the cloud versus the edge? It primarily boils down to the deployment step. In a traditional central cloud-based deployment, the deploy step hands your containerized application to your cloud orchestrator, perhaps Kubernetes or Amazon ECS. The output of the deploy step is a single, central point of delivery.
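
To make the contrast concrete, here is a minimal sketch of what the deploy step amounts to in a central cloud pipeline: one call to one control plane. The endpoint, the payload, and the use of the requests package are assumptions for illustration, not any particular orchestrator's API.

    import requests  # assumes the third-party 'requests' package is installed

    def deploy_to_cloud(image: str) -> None:
        # Hypothetical single-cluster deploy: one request to one central
        # control plane, which handles scheduling from there.
        manifest = {"app": "shop-backend", "image": image, "replicas": 3}
        resp = requests.post(
            "https://orchestrator.example.com/api/deployments",  # hypothetical endpoint
            json=manifest,
            timeout=30,
        )
        resp.raise_for_status()

    deploy_to_cloud("registry.example.com/shop-backend:1.4.2")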

The edge use case is fundamentally different:

  • the application needs to be deployed and upgraded across hundreds or thousands of edge POPs (points of presence)
  • connections to the POP sites can be slow or down entirely
  • the constrained characteristics of edge sites imply more complex error and recovery scenarios
  • the deployment configuration needs to express logical conditions for deployment, such as location, POP characteristics, groups of POPs, or the availability of devices such as cameras
  • testing becomes a distributed process across sites
  • observability needs to be configured across all edge sites

Stringing together your existing deploy tools to handle the above challenges can lead to unnecessary complexity and operational risk.

What is needed? First, you need an orchestrator that manages all the edge clusters; multi-cluster management and deployment is fundamental for the edge. The multi-cluster deployment features must embed inherent mechanisms such as canary and rolling upgrades to manage the edge deployment challenges listed above. Next, the deployment test needs a converging algorithm; probes at one site out of thousands do not tell you much. What matters is the converging, aggregated state of all probes across all sites. Finally, no deployment is done until proper operational monitoring is configured. As with testing, monitoring needs to cover the application state across all sites and offer the ability to drill down. More on that in a later article.
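
As a sketch of what such a converging test algorithm could look like, the snippet below aggregates per-site probe results into a single deployment-wide verdict. The site names and probe states are invented for the example:

    from collections import Counter

    # Hypothetical probe results as reported by each edge site.
    site_probes = {
        "stockholm-mall-3": "healthy",
        "berlin-store-17": "healthy",
        "paris-store-42": "starting",   # still rolling out
        "oslo-store-8": "failed",
    }

    def converged_state(probes: dict[str, str]) -> str:
        """Fold all per-site probe states into one aggregated verdict."""
        counts = Counter(probes.values())
        if counts.get("failed"):
            return "degraded"      # at least one site needs attention
        if counts.get("starting"):
            return "converging"    # roll-out still in progress
        return "healthy"           # every site reports healthy

    print(converged_state(site_probes))  # -> degraded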

At the edge, you don’t run all applications, everywhere, all the time

Above we elaborated on the required edge deployment mechanisms. What is still missing is the deployment definition itself: where and under what conditions shall the applications be deployed? This is critical for the edge use case; we need fine-grained deployment definitions. But we do not want developers to have to know exact zone names, or to keep the characteristics of the zones in their heads or Excel sheets, answering questions such as "which zones should have application A, and which of them have a GPU and a camera?" For that purpose, it is essential to separate two artifacts:

  • An application specification that defines the structure of the application, its containers, and configuration
  • A deployment specification that declaratively defines where to deploy the application. Rather than enumerating explicit zone names, it should express placement through abstractions: feature requirements and logical labels (see the sketch after this list).
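
To make the separation concrete, here is a sketch of the two artifacts as plain Python data structures. The field names and values are invented for illustration and do not follow any particular product's schema:

    # Artifact 1: the application specification - what the application is.
    app_spec = {
        "name": "camera-analytics",
        "containers": [
            {"name": "inference", "image": "registry.example.com/inference:2.1"},
            {"name": "uploader", "image": "registry.example.com/uploader:1.3"},
        ],
    }

    # Artifact 2: the deployment specification - where the application should
    # run, expressed as logical conditions rather than explicit zone names.
    deployment_spec = {
        "application": "camera-analytics",
        "match": {
            "labels": {"region": "nordics", "type": "retail-store"},
            "requires-devices": ["camera"],
            "requires-features": ["gpu"],
        },
    }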

A significant implication of the second bullet above is that the deployment phase is not a one-shot operation. The declarative deployment specification needs to be evaluated against the edge sites continuously. For example, if a camera is added to an edge site that matches the specification, the application should instantly be deployed at that site; no one should need to press a button. In this way, the edge deployment step is a continuous process that constantly converges the edge state towards the desired state.
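
Building on the hypothetical specifications sketched above, a minimal reconciliation pass could look like the following. An orchestrator would run this continuously against the live site inventory, so a site that gains a camera matches on the next pass; the inventory and the deploy call are stand-ins:

    def site_matches(site: dict, match: dict) -> bool:
        labels_ok = all(site["labels"].get(k) == v
                        for k, v in match["labels"].items())
        devices_ok = set(match["requires-devices"]) <= set(site["devices"])
        features_ok = set(match["requires-features"]) <= set(site["features"])
        return labels_ok and devices_ok and features_ok

    def deploy_to_site(site_name: str, app: str) -> None:
        # Stand-in for the real scheduling call on the edge orchestrator.
        print(f"deploying {app} to {site_name}")

    def reconcile(sites: list[dict], spec: dict, deployed: set[str]) -> None:
        """One convergence pass: deploy wherever the spec matches and the
        application is not yet running."""
        for site in sites:
            if site_matches(site, spec["match"]) and site["name"] not in deployed:
                deploy_to_site(site["name"], spec["application"])
                deployed.add(site["name"])

    # Example inventory; in reality this is discovered, not hard-coded.
    sites = [
        {"name": "stockholm-mall-3",
         "labels": {"region": "nordics", "type": "retail-store"},
         "devices": ["camera"], "features": ["gpu"]},
        {"name": "bergen-store-9",
         "labels": {"region": "nordics", "type": "retail-store"},
         "devices": [], "features": ["gpu"]},   # no camera (yet)
    ]

    deployed: set[str] = set()
    # deployment_spec as defined in the previous sketch.
    reconcile(sites, deployment_spec, deployed)  # deploys to stockholm-mall-3 only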

In a previous article, we showed in detail how to set up Avassa as the edge deployment engine, meeting all the requirements in this article.

Read more in our white paper on Observability in the distributed edge: The full story.
