Defining the edge in edge computing
Using the right words is essential, but one word often means different things to different people, and its definition frequently depends on the context. This is very true for the word "edge". Where is that edge? Are there several edges? My edge or your edge? In this article, we will elaborate on different kinds of edges, without the ambitious goal of establishing a new taxonomy. As you read, focus on the differences between the various edges rather than on picking a single term. Due to the lack of standardized terminology, you will most likely see other terms for the same things in other articles.
Learn more: Which edge do you use with Avassa?
There are two major dividers:
- The network last mile: This draws the line between "my" edge and "your" edge, depending on whether you are an enterprise/user or a service provider. The edge that belongs to the enterprise/user sits after the last mile; the edge that belongs to the cloud service provider sits before it.
- Compute ownership: This delineates whether the enterprise owns the compute or the cloud service provider manages it. Put another way: who owns and manages the point of presence (PoP)?
Using these two dividers as a base, we can draw a matrix of edges, as shown in the table below; the last mile sits between the regional/local edges and the two on-site categories. We see that the different edges form a mosaic where one edge connects to another:

| | Regional and local edges | On-site compute edge | Devices edge |
| --- | --- | --- | --- |
| Compute managed by a (cloud) service provider | (1) | (3) | (5) |
| Compute owned and managed by the enterprise/user | (2) | (4) | (6) |
To the left of the last mile divider, we see the edge from the provider's perspective, where their infrastructure and network end. To the right, we see the edge from the user's perspective, where they "live on the edge." This is why LF Edge, in their edge taxonomy paper, calls the left part the "Service Provider Edge" and the right part the "User Edge".
We will now go through the matrix in more detail, referring to each cell by its associated number.
Regional and Local Edges
Regional and local edges are either managed by a cloud service provider (1) or run in your organization's own distributed data centers (2). They stop before the last mile to your end users' points of presence (PoPs). By definition, you have no edge applications within your own PoP, your retail store, your robot, and so on. Instead, you connect to the edge of the provider.
We can use AWS to illustrate the first category, managed by the cloud service provider (1). AWS edge locations are closer to users than Regions or Availability Zones, often placed in major cities to keep response times low. AWS currently provides 400+ edge locations. These edge locations are not full-fledged data centers where you can run any workload; for that purpose, AWS is launching Local Zones, which target single-digit millisecond latency and are so far available in a few major locations.
Dedicated edge service providers like StackPath focus on providing full functionality at each edge location. In their model, all data centers/PoPs offer generalized compute, such as running containers, and aim to be available as close to users as possible. In addition, providers with a regional focus can handle local regulations.
CDNs like akamai.com and Fly.io are another important example of this kind of edge. Some of them existed even before the term "edge" was coined. CDNs are also moving towards providing generalized compute to their users.
From a terminology point of view, what we described above (1) is sometimes also called the "cloud edge," meaning the edge of the cloud provider.
But the regional/local edge can also be owned and managed by your enterprise (2). This is where we see private data centers and various Kubernetes solutions. Enterprises might choose this option for regulatory reasons, or because they can optimize the PoP locations for their needs. The trade-off is the operational cost of managing the edge stack and the PoPs.
On-site Compute Edge
The on-site compute edge resides within an organization's boundary and is physically close to the users. In most cases, the compute platform is owned and managed by the user organization (4). However, there are hybrid models where the cloud provider places managed compute within an organization's boundary (3), for example, AWS Snowball or HPE GreenLake.
The on-site compute edge can range from single, small hosts like Intel NUCs to a small rack of servers. The essential characteristic is that it runs on the premises and uses the local network. Furthermore, the number of nodes within a cluster is small, typically fewer than ten.
Ideally, it should not depend on last mile or internet connectivity, so that it keeps running well for extended periods. Another essential characteristic of the compute edge is that it provides a generalized compute stack based on Linux and containers. In that sense, it provides a flexible and agile place to run any software close to the physical reality and the users.
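As a minimal illustration of that generalized compute stack, the sketch below uses the Docker SDK for Python to start a containerized workload on a single on-site host. The image name and port are placeholders, not tied to any particular product.

```python
# Minimal sketch: running a containerized workload on a single
# on-site edge host with the Docker SDK for Python (docker-py).
# The image "example/video-analytics" is a placeholder.
import docker

client = docker.from_env()  # talks to the local Docker daemon

container = client.containers.run(
    "example/video-analytics:latest",
    detach=True,
    name="video-analytics",
    restart_policy={"Name": "unless-stopped"},  # survive host reboots
    ports={"8080/tcp": 8080},                   # expose on the local network
)
print(f"Started {container.name} ({container.short_id})")
```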
But it is not enough to rely solely on a set of hosts with Linux and Docker at each compute location. You will need a local cluster manager at each location to manage applications on the local compute edge, combined with a central management and orchestration solution. To read more about this, take a look at our previous article on this topic.
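To make that split concrete, here is a hedged sketch of the pattern: a central orchestrator publishes a desired state per site, and a local agent at each location reconciles the running containers against it. Everything here, from fetch_desired_state to the site name, is hypothetical; products in this space expose their own APIs for this.

```python
# Hypothetical reconcile loop for a local edge agent. The central
# orchestrator is assumed to publish a desired-state document per
# site; fetch_desired_state() is a placeholder for that call.
import time
import docker

SITE = "store-042"  # hypothetical site identifier

def fetch_desired_state(site: str) -> dict:
    """Placeholder: in a real system this would call the central
    management plane. Returns {app_name: image} for this site."""
    return {"pos": "example/pos:1.4", "signage": "example/signage:2.0"}

def reconcile(client: docker.DockerClient, desired: dict) -> None:
    running = {c.name: c for c in client.containers.list()}
    # Start anything desired but not running.
    for name, image in desired.items():
        if name not in running:
            client.containers.run(image, detach=True, name=name,
                                  restart_policy={"Name": "unless-stopped"})
    # Stop anything running that is no longer desired.
    for name, container in running.items():
        if name not in desired:
            container.stop()
            container.remove()

if __name__ == "__main__":
    client = docker.from_env()
    while True:
        reconcile(client, fetch_desired_state(SITE))
        time.sleep(30)  # keep converging, even if the last mile is down
```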
The compute edge delivers on the promise of the edge, focusing on low latency, autonomy, and digital experience at the location where the users, or robots for that matter, are. With a true compute edge solution, you are not dependent on the quality of the last mile network. This is the space where we at Avassa provide an efficient solution. Other players in this space are, for example, Rancher in combination with K3s.
The terminology is tricky, as you will sometimes hear a small rack of servers in the compute edge scenario referred to as an "edge cloud", which is not to be confused with the "cloud edge."

Devices Edge
The devices edge refers to smaller, constrained devices such as microcontrollers, wearables, PLCs, and embedded devices. These devices are typically not based on open compute capable of running containers. Instead, they have very limited compute capabilities and are, in many cases, updated with complete images, including both the OS and the applications. We can also include thin clients in this category.
There are vertical, fully managed far edge solutions provided by vendors (5), like cash registers and industrial devices, that fall into this category. In this case, the user organization outsources the complete edge to its device provider. But an organization can also use small, general-purpose devices (6), typically connected to the compute edge. Imagine, for example, camera devices connected to Linux servers that run video analytics within a factory. In many cases, the devices are connected through the on-prem compute edge.
Keep reading: What is edge computing?

Edge Scenarios
With the matrix above, we can illustrate different strategies for running applications at the edge:
- Amazing Coffee Cafeteria Inc: Optimizing for cost. The coffee shops are equipped with browser-based terminals communicating with backend systems at the regional edge. This falls into the category "devices edge" with thin clients and no on-prem compute edge: (1) + (5).
- Slick T-shirts Inc: A modern, trendy clothing shop, optimizing for the buyers' experience. Artisan t-shirts are sold in design boutiques. They focus on building a community of buyers with repeat sales through individual in-store offers. Therefore, they have applications running in the store for digital signage and POS. Furthermore, visitors use their app in the store. For this experience to work, the retailer deploys an in-store compute edge in combination with backend applications running in their own regional PoPs: (2) + (4).
- Modern Mining Inc: Operates mines, with a focus on personal safety. Among other things, it needs to keep track of individual personnel locations for safety reasons. Therefore, the personnel have wearable sensors communicating with local compute at each mine and the integrated safety system. Due to the sensitivity of the data, the organization uses its own regional data centers. They have modern, local compute to provide horizontal solutions for running many containerized applications at each mine: (2) + (4) + (6).
What is my preferred edge?
Imagine you have developed a containerized application on which your factories, stores, buildings, and so on depend. Should you run it at a regional/local edge using a service provider, or within your premises on the compute edge?
As with all exciting questions, the answer is "it depends." Historically, with the cloud movement, we have seen the benefits of running things in the cloud, the most compelling being ease of deployment and no need to buy and manage hardware. To some degree, the perimeter-based security functions of the service provider can also help address security concerns.
So why should you drop your containers on compute within your premises? Some of the main reasons are:
- Shortest latency possible: Many applications suffer even from the latency between your premises and the service provider. If you need single-digit millisecond latency, the on-prem edge is the way to go in most cases (see the sketch after this list).
- Local storage: Sensitive data sets, such as call logs or medical data within hospitals, are not allowed to leave the secure location.
- Resilience: Protection against last mile and network issues. If you run hundreds of stores, you still want to be able to sell coffee even if some shops lose connectivity. Business continuity is a key concern here.
- Regulations: GDPR, for example, is a regulation that might stop you from putting data within the service provider's boundaries.
- Control: You might need full control over upgrade and maintenance procedures. Although cloud service providers relieve you of some operational tasks, they also force their upgrades on you, which might hit your running business.
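To make the latency point above concrete, here is a minimal, hedged sketch that compares TCP connect times to an on-prem edge host and a regional provider endpoint. Both hostnames are illustrative placeholders, not real endpoints.

```python
# Hedged sketch: comparing round-trip connect latency to an on-prem
# edge host versus a regional cloud endpoint. Hostnames below are
# placeholders for illustration only.
import socket
import time

def connect_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP connect time in milliseconds over a few samples."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000

for label, host in [("on-prem edge", "edge.store.local"),
                    ("regional cloud", "eu-west-1.example-provider.com")]:
    try:
        print(f"{label:>14}: {connect_latency_ms(host):.1f} ms")
    except OSError as exc:
        print(f"{label:>14}: unreachable ({exc})")
```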
Terminology soup
Other commonly used edge terms are:
- Near edge and far edge: These terms are used by (cloud) service providers. From their perspective, the near edge refers to their infrastructure edge, and the far edge refers to the infrastructure furthest away from them.
- MEC: Multi-access edge computing, formerly mobile edge computing, is an ETSI-defined network architecture concept that enables cloud computing at the edge of a network, particularly mobile networks. Imagine the compute edge as described here available at each base station.
- Fog computing: In some contexts, fog and edge are used interchangeably. But most definitions hold that fog computing always uses edge computing, while edge computing might or might not use fog computing. Also, fog includes the cloud, while edge does not. This implies that the fog sits between edge devices and the cloud: it bridges the network and provides local compute to process the data from edge devices. This article considers the compute edge to provide both the fog and the devices.
- Cloud edge vs edge cloud: These terms mean very different things, but read too quickly, they look very similar. The cloud edge refers to the edge of the cloud service provider, where the last mile starts. An edge cloud, on the other hand, refers to cloud characteristics at the edge, even at the compute edge. For example, a small cluster of three servers within your retail store is, to some degree, a constrained cloud.
For a larger bowl of soup, see the Open Glossary of Edge Computing.