Simplified container application management at the far edge with Red Hat and Avassa
The evolution of cloud technology has enabled organizations to streamline and automate their software development lifecycle and to scale resources flexibly and efficiently. By reducing infrastructure complexity and standardizing development processes, it has accelerated the deployment of innovative features and increased responsiveness to customer demands. Similar advancements are now taking place in edge computing, where distributed, location-specific edge sites can provide outage resilience and fast, local performance to the applications that need it.
Running applications in the optimal location
Production-grade small-footprint hardware for edge environments is now commercially available at scale, in a wide variety of configurations including capable GPU solutions. This allows organizations to distribute workloads to the optimal location and expand beyond the “everything, always in the cloud” paradigm to harness offline capabilities, better scaling for bandwidth-heavy applications, and faster response times.
In some verticals, including factory automation, oil and gas, retail, and quick-service restaurants, this means deploying and managing applications across many thousands of locations, where each location may be as small as a single-node rugged server. In contrast with regional availability zones, sometimes referred to as the regional edge, these types of distributed environments are commonly called the far edge.
Compared to legacy embedded systems with monolithic operating systems and tightly coupled applications, these types of general-compute platforms have paved the way for mainstream software stacks including Linux and container runtimes. This advancement marks the start of the “edge clouds” era, where the distributed nature of location-specific hardware configuration can be made available to application teams with the same user-friendly format, security capabilities, and automation as they’ve come to expect from central clouds.
Every application layer relies on an underlying OS layer
At Avassa, we have been involved in numerous edge computing projects, helping customers find a complete solution spanning a choice of hardware platform, operating system, and application orchestration that meets their requirements for software at the edge. We know that fully automated life cycle management across the stack is a key requirement to ensure customers can automatically deploy and update hardware, Linux, and applications without requiring manual operations on-site.
During our customer engagements, we found rising demand for a Linux distribution with strong centralized automation and lifecycle features, including Day 2 operations and vulnerability management. That’s why we were delighted to see the general availability of Red Hat Device Edge.
Red Hat Device Edge is built on Red Hat Enterprise Linux, the world’s leading enterprise Linux platform, and includes Red Hat Ansible Automation Platform to provide a consistent platform designed for resource-constrained environments that require small-form-factor compute at the device edge. Red Hat Device Edge with Red Hat Enterprise Linux and Podman offers a stable, rpm-ostree-based Linux distribution that allows customized Red Hat Enterprise Linux images to be built and managed centrally through the Hybrid Cloud Console. With the rise of Podman as the container runtime of choice, we knew Red Hat Device Edge would provide a manageable platform to run on, designed for operationally scaling out to the edge domain.
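As a rough illustration of what building such a customized image can look like, an rpm-ostree-based edge image can be described with an image builder blueprint. The blueprint name, package set, and user account below are hypothetical examples, not a prescribed configuration:

```toml
# Hypothetical image builder blueprint for an rpm-ostree edge image.
name = "edge-site-image"
description = "Example RHEL for Edge image with Podman tooling"
version = "0.0.1"

# Example package selection; adjust to the workloads at your sites.
[[packages]]
name = "podman"
version = "*"

# Example local admin account for break-glass access.
[[customizations.user]]
name = "edgeadmin"
groups = ["wheel"]
```

From a blueprint like this, an edge commit can then be composed with a command along the lines of `composer-cli compose start-ostree edge-site-image edge-commit`, producing an ostree commit that edge devices can pull and atomically update to.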
Choosing the right tool for the task
While the comprehensive automation provided by a well-designed edge cloud hides the complexity of the underlying infrastructure from users, that infrastructure still needs to be installed and managed at scale. Managing infrastructure across hundreds or even thousands of locations differs from central-cloud operations in subtle but important ways: tasks such as inventory management and rolling upgrade cycles can become daunting for teams charged with providing a robust and secure platform for applications over time.
Each component across hardware, operating system, and container layer requires an automated means of installation, upgrades, and health monitoring that preferably fits into existing IT tools and processes. Plus, the solution must provide context-rich and actionable insights into the health of the components to support stringent service-level objectives for business-critical applications running at the edge.
The Avassa Control Tower manages remote hosts, including OS upgrades, and provides extensive container and VM-based application lifecycle management. This includes broad Day 2 operations features including distributed logging and observability tooling as well as application-centric health monitoring.
The Avassa agent integrates with Podman on edge hosts to provide secure call-home features for rapid onboarding and to eliminate operational tasks at remote sites. It adds robustness features including local logging facilities and local secrets management for offline capabilities.

Avassa’s built-for-edge solution platform combined with Red Hat Device Edge, Red Hat Enterprise Linux, and Podman provides a comprehensive, easy-to-use, and secure solution for enterprises looking to host and manage container applications across their far-edge infrastructure. With it, customers can extend cloud investments to edge lifecycle management and achieve the stability and security they require.
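As a minimal sketch of the Podman layer this kind of tooling builds on, a container workload on an edge host can be declared as a systemd “quadlet” unit (supported in Podman 4.4 and later), so that Podman restarts it locally even when the site is offline. The unit name and image path below are hypothetical:

```ini
# /etc/containers/systemd/edge-app.container (hypothetical unit name)
[Unit]
Description=Example edge application container

[Container]
# Hypothetical application image; replace with your registry path.
Image=registry.example.com/acme/edge-app:1.2
PublishPort=8080:8080

[Service]
# Restart locally without any central connectivity.
Restart=always

[Install]
WantedBy=multi-user.target
```

On boot, Podman’s systemd generator turns this file into a regular service unit, so the container is managed with the same `systemctl` workflow as any other host service.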
