What is edge computing?


Edge computing is one of those terms that draws a reluctant nod of vague recognition but leaves a big disconnect between the term and real-world examples. So let’s sort things out.

What is edge computing?

The term edge computing refers to the concept of placing computers hosting applications and data geographically close to the end-users and machines that interact with them – at the edge of the network.

Examples of applications that are fit to run at the edge include journal management systems in hospitals, video analytics applications in retail stores, and production line control systems on factory floors: in short, any application that benefits from running close to the environment in which it is expected to contribute value.

The two histories of edge computing

To understand the impact of edge computing on modern infrastructure, it helps to look at where it came from. The history of modern computing starts with the mainframe and time-sharing era of the 1950s and 1960s, when large computers ran batched calculations in labs. Microcomputers brought computing to individuals in homes and workplaces in the 1970s and led to the personal computer revolution of the 1980s. And with the massive sea change brought about by the internet, we eventually arrived at cloud computing in the 1990s, where the compute and storage resources of enormous numbers of computers in centralized locations are made available on demand, without active management by the user.

As the 2010s rolled around, usage of the term “edge compute” gradually increased to capture the growing focus on running applications for data processing closer to users and machines.

Stepping back, we can think of the history of what we now call edge computing from two angles.

  • The edge that has always been around: one angle is that we have performed various types of computation across many locations for a long time, and by connecting those computational machines to the network they became edge computing.
  • Edge as the answer to the increase in internet traffic: the other angle is to think of edge computing as a way of making applications that were born on the internet more scalable, and therefore more valuable, at the edge as well.

The edge that has always been around

Throughout history there has always been a need to perform computations across many locations. One example is the cash register. Patented in 1883, it came about to help with adding up items and to produce a printed record of sales transactions, the receipt, as a safeguard against embezzlement. Ninety years later, in 1973, the first Electronic Cash Register (ECR) with networking capabilities was installed. The computational machine that had been plodding along disconnected all those years had finally found a way to phone home and share data with a central location.

Needless to say, James Ritty, the inventor of the first cash register (beautifully named the “Incorruptible Cashier”), did not think of his invention as edge computing.

Edge as the answer to the increase in internet traffic

The specific label “edge computing” came about in the 1990s to describe a technical solution to a problem created by the explosive growth of internet traffic and the web.

As the web became mainstream, the people tasked with operating websites experienced what they called the “flash crowd” problem. The traffic load from the ever-increasing number of website visitors exceeded the capacity of commercially available servers and resulted in sites crashing or serving web content very slowly.

A startup called Akamai came up with a solution. Their idea was to place copies of the web content on many servers closer to users and serve a subset of the user population from each location. By putting the content close to the “edge” of the internet, they could serve users faster from nearby servers.
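The core idea, routing each user to the nearest copy of the content, can be sketched in a few lines. This is an illustration of the principle only, not Akamai’s actual algorithm (real CDNs route on DNS, network topology, and load, not just geography), and the location names and coordinates below are made up for the example.

```python
import math

# Hypothetical edge locations with (latitude, longitude) coordinates.
EDGE_LOCATIONS = {
    "stockholm": (59.33, 18.06),
    "frankfurt": (50.11, 8.68),
    "new-york": (40.71, -74.01),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_pos):
    """Pick the edge location geographically closest to the user."""
    return min(EDGE_LOCATIONS,
               key=lambda name: haversine_km(user_pos, EDGE_LOCATIONS[name]))
```

A user in Paris would be served from the Frankfurt location rather than crossing the Atlantic to New York, which is the whole point: shorter distance, lower latency.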

Benefits of edge computing

The main benefits of edge computing come from the inherent properties of an architecture that places computers close to users and sources of data. With the snowballing growth of decentralized data sources come follow-on challenges around privacy, autonomy, performance, and economics that such an architecture addresses.

  • Local requirements on data privacy and residency can be hard to meet with data processing in central locations. Placing data collection and processing in the appropriate locales meets these requirements and allows more precise rules on exactly which data to export for additional processing and decisions.
  • Many transactions at the edge of the network must be designed to survive prolonged infrastructure outages. Putting business-critical features on premises, close to the transaction, significantly reduces the risk of business-impacting outages.
  • Many businesses need insights and predictive analytics in near-real time on data collected at the edge. Placing compute resources close to the data source brings low latency, high bandwidth, and data offload, as well as trusted computing and storage.
  • The cost of transporting data collected at the edge for central processing can be significantly reduced, not least after the explosive increase in such data. Transporting high-volume data, such as high-definition video, comes with a cost model that is not aligned with the value of the result. Processing the data locally and generating valuable results without the cost of transport brings costs in line with expectations.
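The privacy and transport-cost points above boil down to one pattern: process raw data where it is produced and export only a small, deliberately chosen summary. A minimal sketch, assuming the data is a stream of numeric sensor readings (the function and field names are made up for this illustration, not any product’s API):

```python
# An edge node reduces raw readings to a small summary locally,
# so the high-volume raw data never leaves the site.

def summarize_readings(readings, threshold):
    """Reduce a batch of raw readings to the few numbers worth sending upstream."""
    over = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "max": max(readings),
        "mean": round(sum(readings) / len(readings), 2),
        "alerts": len(over),  # export only the alert count, not the raw values
    }

raw = [18.2, 19.1, 35.7, 18.9, 36.4, 19.0]   # e.g. temperature samples
summary = summarize_readings(raw, threshold=30.0)
```

Upstream receives four numbers instead of the full stream, which is what keeps both the bandwidth bill and the exported-data surface small.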

Example use cases

  • Retail: ML/AI-based in-store applications, e.g. video analytics, inventory management, and digital wayfinding. An integrated omnichannel approach. Improved customer experience with AR solutions and self-checkouts.
  • Industry & manufacturing: deep insights and forecasting analysis from production lines in near-real time. Improved efficiency with ML/AI video analysis.
  • Healthcare: activity tracking to ensure sufficient staffing and supply levels. Autonomous operations without vulnerability to connectivity disruption.
  • Energy: deep insights and predictive analysis from on-site operations in near-real time. Personnel allocation with ML/AI video analysis.
  • Telco: improved data privacy to meet enterprise, government, and telecom-industry-specific compliance requirements.
  • Public sector: easier GDPR compliance and sensitive data management.

Different layers of edge computing

Perhaps the most confusing aspect of the term edge computing is that it is a broad enough concept that it is used to describe a very wide variety of real-world scenarios. We can make a simple classification based on the size and purpose of the compute locations.

Regional edge

A regional edge location is an extension of the central cloud in that it is managed and offered using a cloud operating model. We can think of these locations as smaller versions of the very large hyper-scaler data centers that offer the exact same kind of infrastructure services, just with a little less compute and in more places.

Compute edge

The compute edge marks the furthest reach of general-purpose computer platforms, i.e. computers running mainstream operating systems and configured to run many applications at the same time. These locations are small enough that they can’t offer all the services provided by the central and regional clouds, and usually have hard limits on the available compute and storage.

Far edge and IoT

The far edge can be said to comprise all the smart devices connected to the physical world, sending and receiving large amounts of data for processing and analysis. Another term for these systems of physical devices is the Internet of Things (IoT).

