Defining the Edge in Edge Computing: A Comprehensive Guide
Using the right words is essential, but the same word often means different things to different people, and its definition depends on the context. This is very much true for the word “edge”. Where is that edge? Are there several edges? My edge or your edge?
What is Edge Computing? A Clear Definition
Edge computing refers to the practice of placing computing resources, such as servers and data storage, closer to the end-users and devices they interact with. This proximity reduces latency and improves performance by processing data locally rather than relying solely on centralized cloud data centers.
In this article, we will elaborate on the different kinds of edges, without the ambitious goal of establishing a new taxonomy. As you read, focus on the differences between the various edges rather than settling on a single term. In the absence of standardized terminology, you will most likely see other names for the same thing in other articles.
Learn more: Which edge do you use with Avassa?
The Difference Between Edge and Cloud
While both are important components of a modern infrastructure strategy, the difference between cloud and edge is key. Cloud computing centralizes data processing in large data centers, often far from where data is generated. It offers scale, flexibility, and efficiency, but it also assumes effectively unlimited resources. Edge computing moves workloads closer to where data is produced, reducing latency and the need for constant connectivity. This includes placing compute directly on-site in a store, hotel, or factory floor. In IT, that means faster response and resilience even when the cloud connection drops. In hospitality, it powers local guest experiences without delay. In industrial settings, it ensures machines react in real time for safety and performance. Understanding the differences shapes how you design systems that are intelligent, fast, and in control.
Why Compute at the Edge? Benefits and Use Cases
Low Latency and Real-Time Processing
When applications run closer to where data is created, response times shrink from seconds to milliseconds. That speed matters when a checkout system needs instant validation, a sensor must trigger a safety mechanism, or a guest expects seamless digital interaction. Edge computing keeps processing local, so operations continue smoothly even if connectivity to the cloud slows or fails.
Edge Examples in Retail, Automotive, and Industrial Sectors
In retail, edge computing runs local point-of-sale systems, digital signage, and in-store analytics without waiting for cloud responses. It ensures transactions complete instantly and promotions update in real time. In automotive, it supports connected vehicle systems, processing data from sensors and cameras directly within the car or service center for faster insights and safer performance. In industrial environments, edge computing powers predictive maintenance and real-time control of production lines, reducing downtime and improving safety. Across all three sectors, the edge delivers speed, autonomy, and reliability where operations can’t afford delay. And these are just examples; edge computing is growing across a diverse set of industries.
Cost and Compliance Advantages
Running workloads at the edge reduces data transfer and cloud usage costs by processing locally before sending only what’s essential to the cloud. It also strengthens compliance by keeping sensitive information within defined geographic or physical boundaries. For organizations operating under strict data regulations, edge computing creates a more controlled, cost-efficient, and resilient environment.
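To make “send only what’s essential” concrete, here is a minimal sketch in Python of edge-side aggregation, assuming a stream of numeric sensor readings. The window size, field names, and the ship callback are illustrative, not part of any specific product:

```python
from statistics import mean

WINDOW = 60  # illustrative: collapse 60 raw readings into one upload

buffer: list[float] = []

def summarize(readings: list[float]) -> dict:
    """Reduce a window of raw readings to one compact record."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

def on_reading(value: float, ship) -> None:
    """Keep raw data local; ship only the aggregate to the cloud."""
    buffer.append(value)
    if len(buffer) >= WINDOW:
        ship(summarize(buffer))  # 60 readings become a single record
        buffer.clear()
```

In a pattern like this, the raw data never leaves the site, which is also what keeps sensitive information inside the compliance boundary.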
Distinguishing the “Edge”: Key Concepts and Terminology
There are two major dividers:
- The network last mile: This draws the line between “my” edge and “your” edge, depending on whether you are an enterprise/user or a service provider. The edge that belongs to the enterprise/user sits after the last mile; the edge that belongs to the cloud service provider sits before it.
- Compute ownership: This delineates whether the enterprise owns the compute or the cloud service provider manages it. Put another way: who owns and manages the point of presence (PoP)?
Using these two dividers as a base, we can draw a matrix of edges, as shown in the table below. We see that the different edges form a mosaic where one edge connects to another:

|  | Regional and local edge (before the last mile) | On-site compute edge (after the last mile) | Device edge (after the last mile) |
| --- | --- | --- | --- |
| Owned and managed by the service provider or vendor | (1) | (3) | (5) |
| Owned and managed by the enterprise | (2) | (4) | (6) |
To the left of the last mile divider, we see the edge from the provider’s perspective, where their infrastructure and network end. To the right, we see the edge from the user’s perspective, where they “live on the edge.” This is why LF Edge, in their edge taxonomy paper, calls the left part the “Service Provider Edge” and the right part the “User Edge.”
We will now go into more detail about the above matrix, referring to each cell by the associated number.
Types of Edge Computing: Regional, Local, and On-Site Edges
1. Regional and Local Edges
Regional and local edges are either managed by cloud service providers (1) or run in your organization’s own distributed data centers (2). They stop before the last mile, ahead of your end users’ points of presence (PoPs). By definition, you have no edge applications within your own PoP (your retail store, your robot, and so on); instead, you connect to the provider’s edge.
We can use AWS to illustrate the first category, managed by the cloud service provider (1). AWS edge locations are closer to users than Regions or Availability Zones, often placed in major cities so that response times can be relatively low. AWS currently provides 400+ edge locations. These edge locations are not full-fledged data centers where you can run any workload; for that purpose, AWS is launching Local Zones. Local Zones are available in a few major locations so far and target single-digit millisecond latency.
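If you want to see which Local Zones are visible from your own AWS account, a short boto3 sketch like the one below can list them (assuming boto3 is installed and AWS credentials are configured; the region is just an example):

```python
import boto3

# Ask EC2 for all zones in the region, including zones you have not opted into.
ec2 = boto3.client("ec2", region_name="us-west-2")
resp = ec2.describe_availability_zones(AllAvailabilityZones=True)

for zone in resp["AvailabilityZones"]:
    if zone["ZoneType"] == "local-zone":  # filter out regular Availability Zones
        print(zone["ZoneName"], zone["OptInStatus"])
```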
Dedicated edge service providers like StackPath focus on providing full functionality at each edge location. In their model, all data centers/PoPs offer generalized compute, such as running containers, and focus on being available as close to users as possible. In addition, more regionally focused providers can handle local regulations.
CDNs like akamai.com and Fly.io are another important example of this kind of edge. Some of them existed even before the term “edge” was coined. CDNs are also moving more towards providing generalized compute to their users.
From a terminology point of view, what we described above (1) is sometimes also called “cloud edge,” meaning at the edge of the cloud provider.
But the regional/local edge can also be owned and managed by your enterprise (2). This is where we see private data centers and various Kubernetes solutions. Enterprises might choose this option due to regulatory reasons and because they can optimize the PoP locations for their needs. The trade-off is the operational cost of managing the edge stack and PoPs.
2. On-site Compute Edge
The on-site compute edge resides within an organization’s boundary and is physically close to the users. In most cases, the compute platform is owned and managed by the user organization (4). However, there are hybrid models where the cloud provider places managed compute within an organization’s boundary (3), for example, Amazon Snowball or HPE GreenLake.
The on-site compute edge can range from a single small host, like an Intel NUC, to a small rack of servers. The essential characteristic is that it runs on the premises and uses the local network. Furthermore, the number of nodes within a cluster is small, typically fewer than 10.
Ideally, it should not depend on last-mile or internet connectivity, so that it keeps running well even through extended periods of disconnection. Another essential characteristic of the compute edge is that it provides a generalized compute stack based on Linux and containers. In that sense, it provides a flexible and agile place to run any software close to the physical reality and the users.
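As a minimal sketch of that autonomy, the Python below shows a store-and-forward pattern: act on data locally right away, buffer the results on disk, and sync upstream only when the last mile is up. The queue directory and ingest URL are hypothetical, not from any specific product:

```python
import json
import time
import urllib.request
from pathlib import Path

QUEUE = Path("/var/spool/edge-queue")             # hypothetical local buffer
CLOUD_URL = "https://central.example.com/ingest"  # hypothetical central endpoint

def enqueue(record: dict) -> None:
    """Persist a record locally so nothing is lost while offline."""
    QUEUE.mkdir(parents=True, exist_ok=True)
    (QUEUE / f"{time.time_ns()}.json").write_text(json.dumps(record))

def try_sync() -> None:
    """Forward buffered records once connectivity returns."""
    for item in sorted(QUEUE.glob("*.json")):
        req = urllib.request.Request(
            CLOUD_URL,
            data=item.read_bytes(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=5)
            item.unlink()  # delete only after the cloud has accepted it
        except OSError:
            break  # still offline; keep the queue and retry later
```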
But it’s not enough to rely solely on a set of hosts with Linux and Docker at each compute location. You’ll need a local cluster manager at each location to manage applications on the local compute edge, combined with a central management and orchestration solution. To read more about this, take a look at our previous article on this topic.
The compute edge delivers on the promise of the edge, focusing on low latency, autonomy, and digital experience at the location where the users, or robots for that matter, are. With a true compute edge solution, you are not dependent on the quality of the last-mile network. This is the space where we at Avassa provide an efficient solution. Other players in this space are, for example, Rancher in combination with K3s.
The terminology is tricky as you sometimes hear a tiny rack of servers in the compute edge scenario referred to as an “edge cloud”, which is not to be confused with the “cloud edge.”

3. Device Edge
The device edge refers to smaller, constrained devices such as microcontrollers, wearables, PLCs, and embedded devices. These devices are typically not based on open compute capable of running containers. Instead, they have very limited compute capabilities and are often updated with complete images, including both OS and applications. We can also include thin clients in this category.
There are vertical, fully managed far-edge solutions provided by vendors (5), such as cash registers and industrial devices, that fall into this category. In this case, the user organization outsources the complete edge to its device provider. But an organization can also use small, general-purpose devices (6), which are typically connected through the on-prem compute edge. Imagine, for example, camera devices connected to Linux servers that run video analytics within a factory.
Keep reading: What is edge computing?
What is an Edge Location? Exploring Edge Points of Presence
Edge Scenarios
With the matrix above, we can illustrate different strategies for running applications at the edge:
- Amazing Coffee Cafeteria Inc: Optimizing for cost. The coffee shops are equipped with browser-based terminals communicating with backend systems in the regional edge. This falls into the “device edge” category, with thin clients and no on-prem compute edge: (1) + (5).
- Slick T-shirts Inc: A modern, trendy clothing retailer, optimizing the buyer’s experience. Artisan t-shirts are sold in design boutiques. They focus on building a community of buyers with repeat sales through individual in-store offers. They therefore have applications running in the store for digital signage and POS, and visitors use the retailer’s app in the store. For this experience to work, the retailer deploys an in-store compute edge in combination with backend applications running in its own regional PoPs: (2) + (4).
- Modern Mining Inc: Operates mines with a focus on personnel safety. Among other things, it needs to track individual personnel locations for safety reasons. Personnel therefore wear sensors that communicate with local compute at each mine and with the integrated safety system. Due to the sensitivity of the data, the organization uses its own regional data centers. It runs modern, local compute to provide a horizontal platform for many containerized applications at each mine: (2) + (4) + (6).
The Role of Edge Locations in Edge Computing
An edge location is a physical or virtual site where edge computing resources are deployed closer to the source of data generation. These locations typically host lightweight infrastructure such as Avassa’s Edge Enforcers, enabling on-site application hosting, low-latency data processing, and real-time analytics. By moving compute and storage capabilities closer to the edge, these sites reduce dependence on centralized cloud infrastructure and help improve responsiveness, privacy, and operational resilience. Whether operating in a single-host setup or as part of a clustered deployment, each edge location plays a vital role in maintaining local autonomy while remaining orchestrated through centralized management systems like the Avassa Control Tower.
Choosing Your Edge: Finding the Right Edge Computing Solution
Imagine you have developed a containerized application on which your factories, stores, buildings, and so on depend. Should you run it at a regional/local edge using a service provider, or on the compute edge within your own premises?
As with all exciting questions, the answer is “it depends.” Historically, with the cloud movement, we have seen the benefits of running things in the cloud, the most compelling being ease of deployment and freedom from buying and managing hardware. To some degree, the perimeter-based security functions of the service provider can also help address security concerns.
So why should you drop your containers on compute within your premises? Some of the main reasons are:
- Shortest latency possible: Many applications suffer even from the latency between your premises and the service provider. If you need single-digit millisecond latency, the on-prem edge is the way to go in most cases.
- Local storage: Sensitive data sets, such as call logs and medical data within hospitals, are not allowed to leave the secure location for security reasons.
- Resilience: Protection against last mile and network issues. If you run hundreds of stores, you still want to be able to sell coffee even if some shops lose connectivity. Business continuity is a key concern here.
- Regulations: GDPR, for example, is a regulation that might stop you from putting data within the service provider’s boundaries.
- Control: You might need complete control over upgrade and maintenance procedures. Although cloud service providers relieve you of some operational tasks, they also force their own upgrade schedules, which might hit your running business.
Factors to Consider When Selecting an Edge Computing Platform (Latency, security, scalability, cost)
When evaluating an edge computing platform—especially one that extends all the way to on-site edge locations rather than stopping at regional edges—there are several critical factors to consider.
- Latency is paramount; the platform should support real-time or near-real-time processing close to where data is generated.
- Security becomes more complex at the on-site edge, where physical environments may lack the protections of a datacenter, making strong identity management, encrypted communication, and secret handling essential.
- Resilience is also key: the platform must handle network disruptions gracefully and allow local operations to continue even if central connectivity is lost.
- Scalability matters not only in the number of deployed sites, but in managing configuration, observability, and updates across them.
- Finally, cost must be assessed holistically—not just infrastructure and licensing, but also operational overhead, since managing on-site edges often demands more thoughtful planning than regional deployments.
Choosing a platform that abstracts and automates much of this complexity is essential for long-term success.
The Security and Privacy Implications of Edge Computing
Securing Data at the Source: Challenges and Best Practices
Edge computing spreads workloads across many sites, which expands the security surface. Each location becomes a potential entry point. The answer is to protect data where it’s created. Use strong identity control, encrypted connections, automated updates, and centralized visibility. Security should be built into deployment from the start, not added later. Learn more in this article: Securing the Edge: Tackling Distributed Security Challenges.
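As one concrete example of strong identity control and encrypted connections, the sketch below builds a mutual-TLS client context with Python’s standard ssl module, so an edge site both verifies the central endpoint and presents its own certificate. The certificate paths are placeholders, not from any specific product:

```python
import socket
import ssl

# Placeholder paths: a private CA plus a per-site client certificate.
CA_FILE = "/etc/edge/ca.pem"
SITE_CERT = "/etc/edge/site.pem"
SITE_KEY = "/etc/edge/site.key"

def connect(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a mutually authenticated TLS connection to a central service."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_FILE)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
    ctx.load_cert_chain(certfile=SITE_CERT, keyfile=SITE_KEY)  # site identity
    raw = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(raw, server_hostname=host)  # verifies server cert
```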
Data Sovereignty and Compliance in a Distributed World
Many organizations must keep data within defined borders to meet privacy laws. Edge computing makes this possible by processing and storing information locally. A retailer can keep customer data on-site. An automaker can manage connected systems without exporting telemetry. An industrial operator can meet audit rules and safety standards while maintaining control. Edge computing unlocks an efficient path towards compliance, without compromising innovation.
Real-World Use Cases of Edge Computing (Autonomous vehicles, industrial IoT, healthcare)
Explore some real-world use cases of edge computing by browsing our Customer Testimonials.
Edge Computing Terminology Explained
Other commonly used edge terms are:
- Near edge and far edge: These terms are used by (cloud) service providers. From their perspective, the near edge refers to their infrastructure edge, and the far edge refers to the infrastructure furthest away from them.
- MEC: Multi-access edge computing, formerly mobile edge computing, is an ETSI-defined network architecture concept that enables cloud computing at the edge of a network, particularly mobile networks. Imagine the compute edge as described here available at each base station.
- Fog computing: In some circumstances, fog and edge are used interchangeably. But most definitions claim that fog computing always uses edge computing, while edge computing might or might not use fog computing. Also, fog includes the cloud, while edge doesn’t. This implies that the fog sits between edge devices and the cloud: it bridges the network and provides local compute to process the data from edge devices. This article considers the compute edge to provide both the fog and the devices.
- Cloud edge vs. edge cloud: These terms mean very different things, but read too quickly they look very similar. The cloud edge refers to the edge of the cloud service provider, where the last mile starts. An edge cloud, on the other hand, refers to cloud characteristics at the edge, even at the compute edge. For example, if you form a small cluster of three servers within your retail store, that is to some degree a constrained cloud.
For a larger bowl of soup, see the Open Glossary of Edge Computing.
How Different Edge Concepts Interconnect (Comparison of cloud, edge, and hybrid models)
1. Cloud Computing
Cloud computing centralizes applications and data in large-scale data centers. It offers virtually unlimited compute and storage, making it ideal for heavy processing, long-term data storage, and global services. However, it often falls short in use cases that require low-latency response, localized data handling, or offline resilience.
2. Edge Computing
Edge computing shifts compute resources closer to where data is generated—on-site or in nearby locations. This model reduces latency, enhances data privacy, and ensures applications continue operating during connectivity disruptions. It’s particularly valuable for real-time or business-critical applications in retail, manufacturing, healthcare, and industrial settings.
3. Hybrid Models
Hybrid models blend cloud and edge computing. They allow central cloud platforms to handle global coordination, heavy analytics, and archival storage, while edge nodes manage local decision-making and immediate data processing. This creates a flexible, resilient infrastructure that adapts to both centralized oversight and decentralized execution.
How They Interconnect
These models are not mutually exclusive—they complement each other. A cloud-first strategy can integrate edge capabilities to meet real-time needs, while an edge-first architecture can rely on the cloud for backup, coordination, or advanced insights. The key is designing workflows and systems that balance central power with local autonomy.
The Future of the Edge: Fueled by AI
How AI and Machine Learning Drive Edge Innovation
AI is a huge tailwind for edge computing. Running AI and machine learning models at the edge turns data into action instantly. Instead of sending everything to the cloud for analysis, insights are created where events happen. This reduces bandwidth use and removes delay. In autonomous systems, it means vehicles can react to changing conditions in real time. In industrial IoT, it enables machines to detect faults and optimize performance on the spot. Processing intelligence locally keeps operations faster, safer, and more efficient.
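As a hedged illustration of local inference, the sketch below scores sensor data on-site with ONNX Runtime, one common runtime for running pre-trained models at the edge. The model file, input shape, and decision threshold are assumptions for the example:

```python
import numpy as np
import onnxruntime as ort

# Assumed: a fault-detection model exported to ONNX and shipped to the site.
session = ort.InferenceSession("fault_detector.onnx")
input_name = session.get_inputs()[0].name

def detect_fault(sensor_window: np.ndarray) -> bool:
    """Score a window of sensor readings locally, with no cloud round trip."""
    scores = session.run(None, {input_name: sensor_window.astype(np.float32)})
    return bool(scores[0].max() > 0.9)  # assumed decision threshold
```

With a pattern like this, only detected faults (and perhaps periodic summaries) need to travel upstream, which ties back to the bandwidth and latency benefits discussed earlier.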