Place and placement in distributed edge clouds
The original cloud operational model made life significantly easier for developers and operations teams. It gave them access to on-demand compute, made their applications location-independent, and allowed applications to scale out automatically with traffic load. This changed the way enterprises thought about IT and applications.
The distributed edge cloud model adds features such as site-local autonomy and the ability to meet geographical privacy laws and regulations, as well as technical capabilities including low-latency applications and highly distributed data processing.
When we say edge clouds, we refer to edge locations with a handful of general-purpose compute hosts, in contrast with traditional IoT and far-edge devices.
The emerging operational model for distributed edge clouds must build on the good things that came out of the cloud operating model while making sure the edge-specific benefits are readily available to developers and operations teams. And it should do this without requiring a significant impact on the ways of working and tooling already in place.
One of the major differences between centralised clouds and distributed edge clouds is the fundamental importance of the specific places of the constituent sites, and the placement of applications on these sites.
Tying together the fundamental notions of place and placement with a cloud-like operating model requires some interesting and edge-specific architectural considerations.
To summarise and make things more explicit, let us look at these characteristics and how they apply to centralised clouds versus distributed edges.
Place and the distributed edge
The sites that constitute a distributed edge cloud are located in a set of physical locations for specific and valuable reasons. Those reasons can be as varied as:
- An application on a host located at a health care institution to maintain regional patient data integrity – in this case, it is the data residing in the edge node that must stay within a specific geographical area
- An application on a small server behind a retail shop counter to ensure non-stop operation of the point-of-sale system during outages – in this case, it is the fact that the server is in the location where it serves in-store customers and does not rely on centralised resources for continued operations
- An application on a ruggedised PC on a light pole in a suburban center for local analysis of video streams for that specific catchment area – in this case, it is the fact that the application can instantly analyse video data and not rely on high-bandwidth connectivity and low latency to a central cloud
The place of each of these example sites carries meaning, and the specific interpretation of that meaning depends on the kinds of applications that will run on top of them.
Good architecture must allow sites to expose an effective representation of their specific place as a fundamental feature.
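As a hypothetical sketch of what such a representation could look like (this is illustrative only, not the Avassa API, and all names and labels are invented), a site might publish its place as a small set of labels that platforms and operators can query:

```python
from dataclasses import dataclass, field


@dataclass
class Site:
    """A hypothetical representation of an edge site and its place."""
    name: str
    labels: dict = field(default_factory=dict)  # place metadata published by the site


# Example sites exposing place-specific labels (illustrative values only)
clinic = Site("clinic-north", {"country": "SE", "type": "healthcare", "data-residency": "EU"})
shop = Site("store-42", {"country": "SE", "type": "retail", "offline-capable": "true"})

# The platform can now answer place-centric questions,
# e.g. which sites guarantee that data stays within the EU:
eu_sites = [s.name for s in (clinic, shop) if s.labels.get("data-residency") == "EU"]
print(eu_sites)  # ['clinic-north']
```

The point is not the data structure itself but that place becomes a first-class, queryable property of each site rather than an out-of-band fact known only to the operations team.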
Precise placement and the distributed edge
And with the concept of location comes the task of fine-grained placement of applications across these locations. The decision of where to place application replicas needs to take several factors into consideration:
- Business requirements, including geographical constraints or the presence of, for example, low-latency networking at specific sites. These aspects drive how to roll out new application features for trial in a specific region, and whether applications can meet service-level agreements based on, for example, the predictable latency associated with a site.
- Application needs, for example resource availability or feature-specific hardware support. This includes support for applications that manage highly sensitive data in locations lacking strong physical security, which may require applications to run only on hosts with hardware-assisted secrets management to meet regulations and policies.
- The lifecycle of rolling out new sites and decommissioning old ones. Adding a new site or removing an existing site impacts the applications that are expected to run (or are already running) on the site. The placement requirements expressed by developer and operations teams form a contract with the infrastructure, one the infrastructure must fulfil consistently over time as sites arrive and depart.
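The considerations above can be sketched as a simple label-matching step. This is a toy illustration under assumed label names, not the Avassa deployment-specification format:

```python
# Toy placement policy: a deployment "contract" expressed as required labels.
# Re-running match_sites() as sites arrive and depart keeps the contract fulfilled.

def match_sites(sites, required_labels):
    """Return the names of sites whose labels satisfy every placement requirement."""
    return [
        name for name, labels in sites.items()
        if all(labels.get(key) == value for key, value in required_labels.items())
    ]


sites = {
    "store-1": {"region": "eu-north", "secure-enclave": "yes"},
    "store-2": {"region": "eu-north", "secure-enclave": "no"},
    "pop-up-3": {"region": "us-east", "secure-enclave": "yes"},
}

# Business requirement: EU region; application need: hardware-assisted secrets management.
policy = {"region": "eu-north", "secure-enclave": "yes"}
print(match_sites(sites, policy))  # ['store-1']

# When a new matching site is commissioned, the same declarative policy
# places the application there too, without any change to the policy itself.
sites["store-4"] = {"region": "eu-north", "secure-enclave": "yes"}
print(match_sites(sites, policy))  # ['store-1', 'store-4']
```

The declarative shape matters: the policy states where replicas belong, and the platform continuously reconciles reality against it as the set of sites changes.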
The placement of applications across sites is fundamental to meeting business, application and operational requirements.
Good architecture must offer application operations teams expressive placement policies to place and maintain applications in the locations where they need to be as the infrastructure grows.
The concept of place is key to distributed edge clouds. Application-centric edge cloud platforms must include strong features for sites to publish location-specific configuration and state.
And that in turn is the architectural underpinning to support the declarative and precise placement of application replicas in the exact locations where they belong.
Read more about our platform here, and find more detailed information about how the Avassa solution uses formal deployment specifications for precise placement of application replicas here.