How easy is it to develop and manage the lifecycle of edge applications?
According to The Reality of Edge Application Development, here is where the industry is today:
“Developing, deploying and maintaining applications at the edge remains incredibly painful today.”
That is a relatively strong statement. Some of the underlying challenges are technical and related to the (lack of an) edge platform itself. Part of the problem also lies in the complex interplay between the teams involved.
Application developers need a platform that makes it delightfully easy to deploy their applications.
The IT and platform teams are responsible for delivering a useful platform. Within IT, the operations team must guarantee application availability; it needs tooling to maintain and monitor both the platform and the edge applications. The application and operations teams are clients, or customers if you like, of the platform team. It is essential that all three share a common understanding of the challenges and possible solutions.
What do I need as an application developer then? As we wrote in a previous article: I just want to run my containers. If we simplify a bit, the fundamental requirements are:
- I should be able to declaratively specify my application configuration, including required devices, GPUs, and storage. The local scheduler on the site should find the optimal hosts where the application should run.
- I should be able to build multi-architecture container images, “Build once, run anywhere”. The scheduler should automatically deploy the containers to the corresponding edge hosts.
- I need edge native APIs for events and secrets.
- And my development CI/CD pipeline should be integrated with the edge platform.
It’s important to realize that this list combines application APIs and platform features.
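The “build once, run anywhere” point, for example, boils down to the scheduler resolving one logical image name to the per-architecture variant built for each host. A minimal sketch of that idea (the manifest structure and names below are invented for illustration, not a real registry API):

```python
# Hypothetical multi-arch manifest: one logical image, several architecture variants.
manifest = {
    "detector:1.0": {
        "amd64": "sha256:aaaa",
        "arm64": "sha256:bbbb",
    }
}

def resolve_image(image: str, host_arch: str) -> str:
    """Return the digest of the variant built for this host's CPU architecture."""
    variants = manifest[image]
    if host_arch not in variants:
        raise ValueError(f"{image} has no build for {host_arch}")
    return variants[host_arch]

print(resolve_image("detector:1.0", "arm64"))  # → sha256:bbbb
```

The developer builds and pushes once; the platform performs this per-host resolution automatically at each site.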
To deploy these applications, the platform team needs to provide an edge application platform with at least the following features:
- Deployments across a large set of edge sites with varying network connectivity.
- Use a local scheduler per edge site to ensure applications run well even without WAN connections. The local scheduler should automatically deal with different architectures and the availability of devices such as cameras and GPUs. Fail-over scenarios should be managed locally per edge site without needing connectivity with the central cloud.
- Easy configuration changes of edge applications across edge sites. It is also essential to allow per-site configuration variations without causing a configuration explosion.
- No-hands application networking at each site. Application developers are not networking wizards; each site should not require manual or complex network configuration tasks.
- Securely install, configure, and manage the life cycle of the platform, including API and tooling upgrades.
- No-hands bootstrapping of edge hardware.
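To make the local-scheduler point concrete: placement at a site is essentially constraint filtering over that site’s hosts, with no central cloud in the loop. A toy sketch (the host and requirement fields are invented for illustration):

```python
# Toy placement: filter a site's hosts against an application's requirements.
hosts = [
    {"name": "host-a", "arch": "amd64", "gpu": False, "devices": set()},
    {"name": "host-b", "arch": "arm64", "gpu": True, "devices": {"camera:high-res"}},
    {"name": "host-c", "arch": "arm64", "gpu": True, "devices": {"camera:standard"}},
]

def eligible_hosts(hosts, needs_gpu, needed_devices):
    """Return hosts satisfying GPU and device constraints; the scheduler picks among these."""
    return [
        h["name"]
        for h in hosts
        if (h["gpu"] or not needs_gpu) and needed_devices <= h["devices"]
    ]

print(eligible_hosts(hosts, needs_gpu=True, needed_devices={"camera:high-res"}))
# → ['host-b']
```

Because this evaluation runs locally per site, placement and fail-over keep working even when the WAN link to the central cloud is down.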
And to top all that off, we have the operational requirements. In the central cloud, we have experienced operations teams managing a few centralized platforms and applications. At the edge, it’s the opposite situation: edges are primarily located in places without technical personnel. And in contrast to monitoring a few central applications, you need to monitor thousands of applications across edge sites. You can read more about edge application monitoring in a dedicated article.
With all these requirements, you risk ending up in a complex platform project. Again, as an application developer, you want to deploy your containers and associated configuration and resources. The platform team needs a fully automated way of adding sites, managing the platform, and providing the deployment tools to the application team. Finally, the operations team needs to be able to monitor the individual applications at each site efficiently.
Let us walk through a scenario to illustrate how it could work. We have the following three teams:
- Application developer: Applifier
- Platform team: Platrick
- Operations team: Oprah
A week at the edge company
Step 1: Applifier defines the application
Applifier has developed an AI/ML application that performs anomaly detection based on video input. It needs a GPU and a specific camera on the host where it runs. It will now be deployed across the sites serving a specific customer, “security.inc”.
She defines an application definition version 1.0 and drops it into the CI/CD pipeline. It simply defines:
- Pick these containers from my public registry address
- Require a GPU
- Require a camera of the model “high-res” on the host.
- Mount a volume for local data
- Claim secrets to authenticate to local systems at the site
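Expressed as data, her definition might look something like the sketch below. The field names are illustrative only; real platforms each have their own schema:

```python
# Hypothetical application definition, version 1.0 (field names are illustrative).
app_definition = {
    "name": "anomaly-detector",
    "version": "1.0",
    "containers": [{"image": "registry.example.com/anomaly-detector:1.0"}],
    "requires": {
        "gpu": True,
        "devices": [{"type": "camera", "model": "high-res"}],
    },
    "volumes": [{"name": "local-data", "mount": "/data"}],
    "secrets": ["site-system-credentials"],
}
```

The point is that this is pure declaration: nothing here names a site or a host, so the same definition deploys unchanged across the whole fleet.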
When she wrote the application, she utilized an edge native event streaming service provided by the platform to publish detected anomalies found in the video stream.
Step 2: Deploy through Platrick´s platform
The platform team, led by Platrick, has installed and configured an edge deployment engine that lets the application team efficiently deploy applications to edge sites using label matching. A one-line deployment configuration is defined to match sites labeled “security.inc”; that way, he does not need to know the exact list of sites and hosts that will run the application. Some of the sites are not connected at the moment, but the deployment engine keeps working to ensure the applications are eventually up and running at all relevant sites. The local scheduler on each site will automatically place the application where the GPU and camera constraints are met. The platform team has no-hands updates of the complete platform and application APIs for all the sites.
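Label matching means the deployment target is a predicate over site labels rather than an enumerated list of sites. Roughly like this (the site records and selector format are invented for illustration):

```python
# Toy label matching: select deployment targets by label, not by site list.
sites = [
    {"name": "store-001", "labels": {"customer": "security.inc"}, "connected": True},
    {"name": "store-002", "labels": {"customer": "retail.co"}, "connected": True},
    {"name": "store-003", "labels": {"customer": "security.inc"}, "connected": False},
]

def matching_sites(sites, selector):
    """Sites whose labels satisfy the selector; disconnected matches receive the app on reconnect."""
    return [s["name"] for s in sites if selector.items() <= s["labels"].items()]

print(matching_sites(sites, {"customer": "security.inc"}))
# → ['store-001', 'store-003']
```

Note that disconnected sites still match; the deployment engine simply completes the rollout when they come back online.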
Once the application is deployed, it can publish detected anomalies on the edge native bus; in case of network outages, these are automatically cached and pushed later.
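The cache-and-push-later behavior is a classic store-and-forward queue. A minimal sketch of the idea (the publisher class and its API are invented, not the platform’s actual event interface):

```python
from collections import deque

class StoreAndForwardPublisher:
    """Buffer events while the uplink is down; flush them in order when it returns."""

    def __init__(self):
        self.buffer = deque()   # events cached locally during an outage
        self.delivered = []     # stand-in for events sent over the network
        self.online = False

    def publish(self, event):
        if self.online:
            self.delivered.append(event)
        else:
            self.buffer.append(event)

    def reconnect(self):
        self.online = True
        while self.buffer:      # push cached events in arrival order
            self.delivered.append(self.buffer.popleft())

pub = StoreAndForwardPublisher()
pub.publish({"anomaly": "intruder", "site": "store-001"})
pub.reconnect()
print(pub.delivered)  # → [{'anomaly': 'intruder', 'site': 'store-001'}]
```

The application code just calls publish; the buffering and ordered replay happen underneath, which is exactly what makes the API edge native.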
Step 3: Oprah performs (no-hands) daily operations
Oprah can see the health of all the individual edge applications as well as overall aggregated site and application health. She can drill down to analyze issues per site and application and dependencies between the edge infrastructure and applications.
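The aggregated view over thousands of site/application pairs can be thought of as a worst-of rollup: a site is only as healthy as its unhealthiest application. A toy sketch (the states and data layout are illustrative only):

```python
# Toy rollup: a site's health is the worst health among its applications.
SEVERITY = {"healthy": 0, "degraded": 1, "failed": 2}

site_apps = {
    "store-001": {"anomaly-detector": "healthy", "pos": "healthy"},
    "store-002": {"anomaly-detector": "degraded", "pos": "healthy"},
}

def site_health(apps: dict) -> str:
    """Worst-of aggregation over the site's application states."""
    return max(apps.values(), key=lambda s: SEVERITY[s])

print({site: site_health(apps) for site, apps in site_apps.items()})
# → {'store-001': 'healthy', 'store-002': 'degraded'}
```

Drilling down is then just the inverse direction: from an unhealthy rollup to the specific application and host behind it.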
During operations, some sites are disconnected. Application failures occur at those sites, but the local scheduler restarts and, if needed, reschedules the applications to appropriate hosts within the site, without requiring a connection to the central control plane.
Step 4: New application version
Applifier delivers version 1.1 of the application: a new container version and new configuration to go along with it. The automated pipeline updates the application definition, and the associated deployment pushes it to the correct edges.
The Avassa edge platform
At Avassa, we are convinced edge application management can be exactly as easy as illustrated above. We address the three personas, Applifier, Platrick, and Oprah, and make things hassle-free for them. We provide an edge orchestration platform with embedded edge native APIs for the developers. All with minimized operational overhead and platform maintenance.
We dare claim that we are one of the few edge platforms that provide complete self-service for edge application developers.
You can see our solution in action at the Edge Field Days.