Why to use edge-native event streaming and secrets management
Most edge applications have some parts of the application running at edge sites and other parts running in a central location. Take the example of a camera-based analytics application. Image recognition and reporting of filtered results are done at edge sites, while data analysis – based on imported results from all edge sites – is done at a central location. Often, this kind of distributed edge system can take advantage of common “system services” (in contrast to user applications), which we refer to as edge-native application services. A few examples include event streaming, secrets management, and local container registries. These services need to be integrated with the infrastructure software in each edge site, such as cluster orchestration, application scheduling, key-value storage, and monitoring.
Two approaches for implementing edge-native application services
There are two basic approaches to include such edge-native application services in an edge software stack:
- Engineered to Work Together: Edge-native application services are pre-integrated on a system level and purpose-built to work together, so that the software stack with orchestration, scheduling, clusters, security, networking, tenants, and edge-native application services works out of the box, without involvement of application teams.
- Best-of-Breed: Application teams or edge site owners will typically select their favorite application services used in the cloud, even when the use case is edge computing rather than cloud computing. Hence, these application services need to be scaled down and possibly modified in other ways to become fit for purpose and manageable at the edge. They also need to be integrated with the rest of the software stack running in the edge sites, including other best-of-breed applications.
Both of these approaches have pros and cons, although we think that the Engineered to Work Together approach is more appropriate for the majority of edge computing users. Let’s take a look at why that is.
Engineered to Work Together vs Best-of-Breed
A major advantage of the “Engineered to Work Together” approach is that it avoids the Least Common Denominator problem that often plagues Best-of-Breed designs. Features in interacting components must be compatible for the system to function as a whole, and all components must also share the same design for system-level concepts such as cluster management and high availability, multi-tenancy, system security, networking, and lifecycle management (upgrades, etc.). Incompatible semantics or APIs across such components restrict functionality, reducing it to what is common across components. The more components that need to interact, the more prominent the Least Common Denominator effect becomes when using a Best-of-Breed approach. The imposed restrictions lead to a system where only a very limited subset of advanced features remains.
A second major advantage of the Engineered to Work Together approach is that it enables synchronized roadmaps and release trains on a system level. With Engineered to Work Together, the platform or stack vendor has full control over the roadmaps and release trains of the components involved. This avoids a lot of potential incompatibilities, which otherwise would be increasingly difficult to manage over time. With a Best-of-Breed approach involving several components from different sources, who has the mandate to decide on roadmaps for the various components to ensure they stay compatible with each other? As new functionality is introduced in every component’s release, it becomes harder and harder over time to introduce new functionality in a system built with a Best-of-Breed approach.
The third major advantage of the Engineered to Work Together approach is that the application developer can rely on someone else (the platform vendor) to correctly configure the high availability and encryption functions in both event streaming and secrets management.
On the other hand, there are also arguments in favor of the Best-of-Breed approach. One such argument, although a dubious one, is that the Least Common Denominator problem does not apply to modern microservice applications that are made up of many components with open APIs in a common format, such as REST. However, a common format is no guarantee of compatibility. For example, different versions of the same API could be incompatible. Or consider a secrets manager and an event streaming system from different projects with incompatible tenancy models: the event streaming system must isolate topics per tenant, and the secrets manager must authenticate those same tenants, which breaks down if the two systems disagree on what a tenant is.
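To make the tenancy argument concrete, here is a minimal, hypothetical sketch (all class and method names are invented for illustration) of two services that agree on a shared tenant identity, so topic isolation in the streaming service and authentication in the secrets manager line up:

```python
from dataclasses import dataclass

# Hypothetical shared tenant identity, used by both services so their
# tenancy semantics agree. In a Best-of-Breed stack, each service would
# bring its own, possibly incompatible, notion of a tenant.
@dataclass(frozen=True)
class Tenant:
    tenant_id: str

class EventStream:
    """Toy event streaming service that namespaces topics per tenant."""
    def __init__(self):
        self._topics = {}

    def publish(self, tenant: Tenant, topic: str, event: str):
        # Topics are isolated per tenant: ("t1", "metrics") != ("t2", "metrics")
        self._topics.setdefault((tenant.tenant_id, topic), []).append(event)

    def read(self, tenant: Tenant, topic: str):
        return list(self._topics.get((tenant.tenant_id, topic), []))

class SecretsManager:
    """Toy secrets manager that authenticates the same tenant identities."""
    def __init__(self):
        self._secrets = {}

    def put(self, tenant: Tenant, name: str, value: str):
        self._secrets[(tenant.tenant_id, name)] = value

    def get(self, tenant: Tenant, name: str):
        # A tenant can only read its own secrets.
        return self._secrets.get((tenant.tenant_id, name))

t1, t2 = Tenant("t1"), Tenant("t2")
stream, vault = EventStream(), SecretsManager()
stream.publish(t1, "metrics", "cpu=0.4")
vault.put(t1, "db-password", "s3cret")
print(stream.read(t2, "metrics"))   # [] – t2 sees none of t1's events
print(vault.get(t2, "db-password")) # None – t2 cannot read t1's secret
```

The point is not the toy implementation but the shared `Tenant` type: when both services are engineered together, tenant isolation means the same thing on both sides.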
Another argument in favor of the Best-of-Breed approach is that many developers are trained in using individual components that are popular in cloud-based systems. One example would be Kafka, which is a very popular event streaming system with thousands of users. Why use a proprietary edge-native event streaming system instead? First, it is possible to design the APIs in a way that makes them quite similar to the mainstream cloud APIs (for example, the Kafka APIs). Hence, the experience is very similar to what developers are used to. Second, the edge-native system properties, which are lacking in cloud-native systems, add value in edge computing. More on this later.
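As an illustration of the API-familiarity point, here is a hypothetical sketch of what an edge-native producer could look like if its surface mirrors the well-known kafka-python `KafkaProducer` pattern (`send` followed by `flush`). The `EdgeProducer` class and its in-memory backing store are invented for this example, not a real product API:

```python
# Hypothetical edge-native producer whose surface mirrors the familiar
# Kafka producer pattern (send/flush), so developers trained on Kafka
# feel at home. The backing store here is just an in-memory list standing
# in for the local persistent log of the edge site.
class EdgeProducer:
    def __init__(self, site: str):
        self.site = site
        self._pending = []   # messages posted but not yet acknowledged
        self.log = []        # stand-in for the local persistent log

    def send(self, topic: str, value: bytes):
        # Same call shape developers know from Kafka producers.
        self._pending.append((topic, value))

    def flush(self):
        # A real edge-native system would persist locally here and
        # forward upstream when connectivity allows.
        self.log.extend(self._pending)
        self._pending.clear()

producer = EdgeProducer(site="edge-site-1")
producer.send("sensor-readings", b'{"temp": 21.5}')
producer.send("sensor-readings", b'{"temp": 21.7}')
producer.flush()
print(len(producer.log))  # 2
```

The familiar call shape is what carries over; the implementation underneath can be purpose-built for edge constraints.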
Examples of edge-native application services
There are several edge-native application services that can be purpose-built to fit into a core software stack in edge environments. Some examples include:
- Secrets management: Secrets need to be distributed from a central location to specific edge applications.
- Event streaming: Central and edge applications need a way to subscribe to and publish events, metrics, logs, and application-specific messages.
Edge-native secrets management
Edge-native features of a secrets management system include:
- Local isolation of sites, so that each edge site is cryptographically isolated from all other edge sites (different encryption keys)
- Hard tenant isolation, where each tenant is encrypted with its own specific key. If the key for one tenant is broken then the keys for all other tenants are still intact.
- Fine-grained control of which secrets will be distributed to which locations. For a given tenant, its secrets will be stored at all places where that tenant is active and only at those locations.
- Audit trail logs are distributed upstream (egress on edge sites), using a stream processing system, so that incident analysis can be performed even if the edge sites cannot be reached. This can help determine how an intrusion was initiated.
- Masking out of certain fields in logs that might contain sensitive information.
- Secure auto-unseal of edge sites using a sophisticated method to assign unseal secrets to edge sites (unseal is the process of providing the encryption key that allows the internal state to be decrypted).
- Centralized management of edge site secrets. Control over to which sites each secret is distributed, central access and audit control over secrets, and centralized rotation of secrets. Secrets are only distributed to where they are needed by an application, and access is only granted to applications that need it. Key management is decoupled from applications and there is no need to embed sensitive information in application containers.
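The fine-grained distribution rule above ("secrets are stored at all places where the tenant is active, and only at those locations") can be sketched as a simple policy function. This is a minimal illustration with invented names, not a real implementation:

```python
# Hypothetical policy: a tenant's secrets are replicated only to the
# edge sites where that tenant has active workloads, and nowhere else.
def distribution_targets(tenant, tenant_placements, all_sites):
    """Return the set of sites that should hold this tenant's secrets."""
    active = tenant_placements.get(tenant, set())
    # Never distribute beyond known sites, never to sites without the tenant.
    return active & all_sites

all_sites = {"stockholm", "berlin", "paris"}
tenant_placements = {
    "acme":   {"stockholm", "paris"},
    "globex": {"berlin"},
}
print(distribution_targets("acme", tenant_placements, all_sites))
# acme's secrets land only in stockholm and paris, not in berlin
```

In a real system the placement map would come from the orchestrator's scheduling state, which is exactly why this service benefits from being engineered together with the rest of the stack.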
Edge-native event streaming
Edge-native features in an event streaming system include:
- Resiliency, so that the stream processing system on an edge site will continue to work even if it loses connectivity to the outside world.
- Local persistence, so that a disconnected stream processing system will buffer posted messages and then, when connectivity is restored, deliver them in the same order as they were posted. There needs to be at least one stream processing node, with persistent storage, at each edge site to achieve this functionality.
- Application life-cycle operations (deploy, monitor, maintain health, upgrade) of hundreds, or thousands, of distributed event streaming nodes from a central location.
- Distributed queries, where queries on data sources, such as event logs, can be distributed over the edge sites. This is done using a tree organization of sites, from the top-level streaming node to the various sites where the calling tenant resides. The query is then evaluated (using filters, for example) at these sites, and the resulting data is passed back up the tree of sites. Data is merged based on timestamps and finally delivered back to the caller. Thus, this is a tool for parallel distributed search at a large number of edge sites. It is especially useful when there are multiple containers, possibly at multiple edge sites, interacting with each other or some external entity.
- Hard multi-tenancy, which is compatible with the multi-tenancy mechanism used by the rest of the system.
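The local-persistence property in the list above (buffer while disconnected, deliver in posting order when connectivity returns) can be sketched as a small store-and-forward node. All names here are invented for illustration, and a real node would use persistent storage rather than an in-memory queue:

```python
from collections import deque

# Minimal store-and-forward sketch: while the edge site is disconnected,
# posted messages are buffered in order; on reconnect they are delivered
# in the same order as they were posted (FIFO).
class StreamNode:
    def __init__(self):
        self.connected = False
        self._buffer = deque()   # stand-in for persistent local storage
        self.delivered = []      # messages that reached the central site

    def post(self, message: str):
        self._buffer.append(message)
        if self.connected:
            self._drain()

    def reconnect(self):
        self.connected = True
        self._drain()

    def _drain(self):
        # Drain strictly front-to-back to preserve posting order.
        while self._buffer:
            self.delivered.append(self._buffer.popleft())

node = StreamNode()          # starts disconnected
node.post("event-1")
node.post("event-2")
node.reconnect()             # connectivity restored
print(node.delivered)        # ['event-1', 'event-2'] – original order kept
```

This is why each edge site needs at least one streaming node with persistent storage: the buffer must survive restarts, not just network partitions.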
Other edge-native examples
Secrets management and event streaming are two examples of generic edge-native application services. Edge-native design is also called for in container registries, for example, which should be locally deployed at each edge site for high-availability reasons. These registries must also be tenant-separated, as images could be considered intellectual property that cannot be shared with other tenants.
A common feature of almost all edge-native application services is a small footprint. They are designed to run with limited CPU and memory resources, in contrast to cloud applications where this is usually not a priority. For example, popular event streaming systems such as Pulsar or Kafka might not fit the resource requirements of edge computing, since they are designed for the cloud or other large datacenters. These systems come with their own dedicated and completely generic implementations of leader election, cluster membership, and persistent storage. With an Engineered to Work Together approach for edge-native application services, where the streaming system is integrated and purpose-built, it is possible to implement a streaming system that uses CPU and memory resources much more efficiently. Similar arguments apply, of course, to the potential for resource-constrained implementations of other edge-native application services.