What differentiates modern edge computing from legacy on-premises applications?
I had an interesting conversation with a long-time friend in the IT industry a while ago. She has run IT operations for several enterprises in various domains. She has been in the IT sector long enough to have seen the cycles: legacy on-premises, VMs, cloud, and now edge computing. But unlike some colleagues, she has not rushed into the next wave without considering the benefits for the organization.
The question I got from her:
“I have been running certain types of applications on local hardware within our retail stores for decades. Does that mean we have been ‘edge’ all along? What is new here?”
That is certainly a valid question. Do we have an “emperor’s new clothes” syndrome?

Before moving on, we need to frame the context for the discussion by explaining what we mean by the edge in this article:
on-premises infrastructure and applications running as close to users as possible, where the data is produced. This excludes other types of edges such as CDNs and AWS Local Zones, which are edge PoPs located upstream of your own location. The retail locations of my friend had moved from on-premises Windows applications to VMs and, more recently, to containers running on local Intel NUCs. Examples of relevant edge applications within this context are point-of-sale systems in retail, high-speed video processing in IoT, medical equipment, and smart building automation.
So yes, they have been doing edge computing all along, even though we did not call it edge. Applications run within the organization’s boundaries to optimize latency and availability. But is there something missing to live up to the principles of modern edge computing?
Keep reading: Confessions of a Platform Engineer, Edge Computing roll-out edition
I started interviewing her a bit about how they managed the lifecycle of the applications at each store location. The experience varied. Regarding application installation and upgrades, there was a mix of manual operations and, in certain cases, even on-site visits. Some sites had local IT heroes who could perform troubleshooting, but others followed the traditional cycle of filing tickets with a central outsourced help desk and waiting for someone to troubleshoot remotely. In the latter case, there was a lot of frustration and a lengthy time to fix.
Another interesting aspect was the lifecycle management of applications: how did they install new applications, or upgrade and reconfigure them? We could see the same pattern here: sites with local IT heroes managed this well, while elsewhere application management happened ad hoc, driven by individual engagement rather than process. The problems she could see with their legacy way of running edge applications were:
- no automatic and controlled rollout of applications across sites
- lack of central view of application versions
- the central help desk and operations teams had no visibility into application performance at each site
- costs for local IT staff that were, to some degree, hidden
So it turns out that legacy edge with on-premises applications is like running a small IT department at each location. The total cost of ownership scales with the number of sites, and application lifecycle management tasks are manual and slow.
Can we fix this with modern edge tools and principles? How is modern edge different?
If we simplify:
modern edge = edge infrastructure and applications + modern cloud tools and processes
Keep reading: Edge and cloud orchestration: same same but different (part 1 of 2)
We continued the discussion with the above definition scribbled down on a napkin. The discussion meandered a bit when “cloud tools and processes” got confused with “running in the cloud”. No, I said: “keep your applications at the edge, but orchestrate them centrally”.
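To make “orchestrate centrally, run at the edge” a bit more concrete, here is a minimal sketch of the pattern: an edge-side agent that pulls its desired state from a central control plane and reconciles what runs locally. Everything here is hypothetical for illustration only (the control-plane URL, the site id, the JSON shape), and in practice you would use an existing orchestrator rather than build your own agent.

```python
"""Minimal sketch of an edge agent: applications run locally,
but the desired state comes from a central control plane.

Assumptions (hypothetical, for illustration only):
- the central API exposes GET /sites/<site-id>/desired-state returning JSON
  like {"apps": {"pos": "2.4.1", "video-analytics": "1.8.0"}}
- the local container runtime calls are stubbed out with print statements
"""

import json
import urllib.request

CONTROL_PLANE = "https://edge-control.example.com"  # hypothetical endpoint
SITE_ID = "store-042"                                # hypothetical site id


def fetch_desired_state(site_id: str) -> dict:
    """Ask the central control plane which app versions this site should run."""
    url = f"{CONTROL_PLANE}/sites/{site_id}/desired-state"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read())


def current_local_state() -> dict:
    """Stub: a real agent would query the local container runtime here."""
    return {"apps": {"pos": "2.3.0"}}


def reconcile(desired: dict, actual: dict) -> None:
    """Compare desired vs. running versions and act on the difference."""
    for app, version in desired.get("apps", {}).items():
        running = actual.get("apps", {}).get(app)
        if running != version:
            # A real agent would pull the new image and restart the container.
            print(f"{app}: upgrading {running or 'not installed'} -> {version}")
        else:
            print(f"{app}: already at {version}")


if __name__ == "__main__":
    reconcile(fetch_desired_state(SITE_ID), current_local_state())
```

The point of the pattern is that the site keeps running its applications locally, for latency and availability, while the central team gets one place to see and change what every site should be running.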
What we finally agreed upon was the following set of principles for “modern edge”:
- Centralized application deployment across locations connected to the enterprise CI/CD pipeline
- Centralized IT operations with deep visibility into each site
- Controlled application lifecycle management according to modern principles for release management, versioning, and canary and rolling deployments (sketched in code after this list)
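As a rough illustration of the third principle, here is a sketch of what a controlled rollout across sites could look like: a couple of canary sites first, then rolling waves with health checks in between. The site names and the deploy_to and healthy helpers are hypothetical stand-ins for whatever your fleet-management or GitOps tooling actually provides.

```python
"""Sketch of a controlled rollout across edge sites: canary first, then waves."""

import time

SITES = [f"store-{n:03d}" for n in range(1, 21)]  # 20 fictional retail sites
CANARY_SITES = SITES[:2]                          # new version goes here first
WAVE_SIZE = 5                                     # sites upgraded per wave


def deploy_to(site: str, version: str) -> None:
    """Stub: trigger the deployment for one site via your tooling of choice."""
    print(f"deploying {version} to {site}")


def healthy(site: str) -> bool:
    """Stub: check the site's health and metrics before continuing the rollout."""
    return True


def rollout(version: str) -> None:
    # 1. Canary: a small set of sites gets the new version first.
    for site in CANARY_SITES:
        deploy_to(site, version)
    if not all(healthy(s) for s in CANARY_SITES):
        print("canary failed, aborting rollout")
        return

    # 2. Rolling waves: remaining sites upgraded in controlled batches.
    remaining = [s for s in SITES if s not in CANARY_SITES]
    for i in range(0, len(remaining), WAVE_SIZE):
        wave = remaining[i:i + WAVE_SIZE]
        for site in wave:
            deploy_to(site, version)
        if not all(healthy(s) for s in wave):
            print("wave failed, pausing rollout for investigation")
            return
        time.sleep(1)  # stand-in for a soak period between waves


if __name__ == "__main__":
    rollout("2.4.1")
```

Compare this with the legacy situation described above: instead of each site upgrading whenever a local IT hero gets around to it, the rollout is staged, observable, and stoppable from one central place.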
Those three were the main things she wanted to change in order to modernize their approach to edge computing. The benefits that immediately show up are:
- more agile feature delivery
- reduced operational cost (including removing hidden costs)
- controlled application roll-outs
- faster response to site issues, since the central operations team can proactively monitor each site
The phrase “edge computing” only indicates where applications are running: at the edge of a certain network topology. It says nothing about the level of automation. Centralized management of edge infrastructure and applications, adopting well-known tools and processes from modern cloud computing, is key when transitioning to modern edge computing.