Moving Beyond Docker Compose and Helm Charts: Purpose-Built for the Edge
When we talk to customers exploring the Avassa Edge Platform, one question we sometimes hear is: “What language do you use to define workloads? Is it Helm Charts? Is it Docker Compose?” It’s a fair question — workload definition formats shape how developers describe, deploy, and maintain their applications.
The short answer is that Avassa uses a schema inspired by Docker Compose, simple and familiar, but extended to cover the additional requirements of automated edge use cases. Features that in other solutions would require custom scripting or ad-hoc tooling are expressed natively in the Avassa workload definition.
In this post, we’ll walk through the different approaches to workload definition, highlight their different purposes, and explain why we chose our path. And if you’re coming from Helm or Docker Compose, we’ll show that adopting the Avassa way is less of a leap than it may seem, while covering edge use cases the other formats were never designed for.
It’s worth emphasizing that this is not about declaring one format “better” than the others. Each was designed for a very different purpose:
- Docker Compose for simple development experiments on a single host
- Helm Charts for packaging and deploying workloads into a Kubernetes cluster
- Avassa application specs for running applications consistently across distributed edge sites
This isn’t a “Go vs. Rust” debate. The goal here is to show the value of a language designed specifically for the edge — one that is declarative, expressive, and built with distributed environments in mind.
Helm Charts and Docker Compose at a Glance
When people ask about workload definition languages, the two most common references are Helm Charts (in the Kubernetes ecosystem) and Docker Compose (in the container developer ecosystem). Both aim to simplify application deployment, but they work in quite different ways.
Docker Compose
Docker Compose was designed for developers running multi-container applications on a single machine. It’s a direct configuration format: you describe your services, images, volumes, and networks in a single YAML file, and docker-compose up brings it to life. It’s simple and immediate, but limited in scope: there’s no native concept of fundamental edge features such as secrets management, flexible ingress networking, or edge site-specific variables.
💡 Compose remains a great developer tool, but it is too limited to cover the edge features that the Avassa platform provides.
Helm Charts
Helm is the package manager for Kubernetes. A Helm Chart defines a set of Kubernetes manifests (Deployments, Services, ConfigMaps, etc.), but instead of editing those manifests directly, you provide values in a values.yaml file. The chart templates then render the Kubernetes resources based on those parameters.
This approach works well in centralized IT environments, where reuse and parameterization bring high value. The Helm ecosystem offers a vast library of pre-built charts, making it easy to deploy common workloads like databases, monitoring tools, or messaging systems. For these scenarios, the availability of examples and community support keeps the learning curve reasonable.
However, Helm is tightly bound to Kubernetes and lacks edge-oriented features. It has no concept of site-specific variables, distributed secrets, or offline-first operation. In practice, charts are most often reused for generic IT workloads, whereas edge developers frequently need to package their own applications for deployment across distributed sites. In that context, the indirection of templates and parameter files adds real complexity, compared with a straightforward, concrete representation of the application with edge capabilities.
Kubernetes is a flexible framework where you typically assemble a collection of different projects, for service mesh, monitoring, logging, and more. This modular approach makes resource definitions and Helm charts intentionally open-ended and extensible, so they can be adapted to the wide variety of add-ons and integrations available.
Avassa takes a different path. By embedding key features and making deliberate design choices, the platform enables the automation and simplicity that edge environments demand. This directly shapes how workloads are defined: application behavior can be expressed clearly and concisely, leveraging the capabilities of the underlying platform without requiring external add-ons. In taking this approach, Avassa is not alone: AWS Greengrass also defines its own edge-specific artifact format rather than relying on Helm or Compose, recognizing that edge environments require a purpose-built abstraction.
💡 Charts remain open-ended Kubernetes templates, while Avassa application specs are tightly integrated with built-in edge features, making workload definitions direct, expressive, and edge-ready.
Examples
This Docker Compose file defines an NGINX container exposed on port 8080, alongside a PostgreSQL database:
version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
When you run docker-compose up -d, Compose starts the containers locally with that configuration.
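A quick way to verify the stack came up (assuming nothing else is bound to port 8080 on your machine):
docker-compose ps                 # both services should be listed as Up
curl http://localhost:8080        # returns the NGINX welcome page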
With Helm, you typically don’t edit the Kubernetes manifests directly. Instead you use a Chart (e.g., the bitnami/nginx chart), which contains templated YAML files. You then customize it via values.yaml.
Helm Chart example, starting with the values file:
replicaCount: 2
image:
  repository: nginx
  tag: latest
service:
  type: ClusterIP
  port: 80
resources:
  limits:
    cpu: 100m
    memory: 128Mi
Deployment template (part of the Chart, simplified):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nginx.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "nginx.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "nginx.name" . }}
    spec:
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
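To render and install the chart with the values file above, the standard Helm CLI workflow applies (web is an arbitrary release name):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install web bitnami/nginx -f values.yaml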
In short: Docker Compose is direct and developer-friendly but not built for fleet-scale production. Helm Charts are powerful in Kubernetes ecosystems, but introduce layers of templating and complexity. Both shaped how teams think about workload definition, and both leave gaps that become very visible in distributed edge environments.
Both Docker Compose and Helm stop at the boundary of a single host or a single Kubernetes cluster. Compose can only ever target the machine where you run it, and Helm will only deploy into the cluster your current context points to. Neither language addresses the challenge of defining a workload once and deploying it across many sites, especially when those sites have intermittent connectivity, different hardware profiles, or segmented networks.
The Avassa Application and Deployment Specification
In Avassa there are two artifacts:
- An application spec (what to run), and
- A deployment spec (where/how to roll it out), including a canary phase followed by a rolling update.
Note that an Avassa deployment specification serves a completely different purpose than a Kubernetes Deployment rendered by Helm. An Avassa deployment spec declares on which edge sites a workload should run; no similar artifact exists in charts.
An example Avassa application specification could look like this (abbreviated for simplicity):
name: nginx-web
version: 1.0
services:
  - name: web
    mode: replicated
    replicas: 1
    variables:
      - name: username
        value-from-vault-secret:
          vault: config-${SYS_SITE} # fetch a secret from a vault whose name ends with the site name
          secret: credentials
          key: username
    containers:
      - name: nginx
        image: nginx:latest
    network:
      ingress-ip-per-instance: # configure ingress at the edge
        protocols:
          - name: tcp
            port-ranges: "8080"
        inbound-access:
          default-action: allow
The Avassa Edge Platform is purpose-built to manage not only edge applications, but also the configuration, secrets, and artifacts required to run them reliably at the edge. To expose these capabilities to developers in a simple and consistent way, the application specification serves as the developer interface. A dedicated format is therefore essential: adopting plain Compose or Helm would have hidden or fragmented many of the edge-native features. With the Avassa specification, those features are expressed directly in the workload definition, making the edge both powerful and approachable for developers.
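As a sketch of that developer workflow, the application above could be created and deployed with supctl, the Avassa CLI. The file names here are hypothetical, and the exact command shapes are documented in the Avassa tutorials:
# create the application from the spec above
supctl create applications < nginx-web.yaml
# tie it to sites with a deployment spec (shown further down in this post)
supctl create application-deployments < nginx-dep.yaml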
Let us now illustrate a couple of straightforward, edge-native expressions in the Avassa application specification.
Examples of Avassa application specifications with edge awareness
Site-aware variables in configuration
When running applications across many locations, there is great value in keeping the configuration as consistent as possible. At the same time, deployments often require small, site-specific differences, for example, issuing a unique client certificate for each site.
The Avassa Edge Platform addresses this by providing a set of built-in variables that can be referenced when instantiating application configurations and certificates. This allows you to maintain a single, reusable specification while still adapting automatically to the context of each site.
For instance, the snippet below demonstrates how to issue client certificates with a common name derived from the site’s name: s3-client-<site name>.
name: s3-client-cert
auto-cert:
  issuing-ca: s3-ca
  refresh-threshold: 2d
  ttl: 7d
  align-to-midnight: false
  truncate-ttl: false
  host: s3-client-${SYS_SITE}
  cert-type: client
allow-image-access:
  - "*"
Often, applications forward data to a central location or cloud, e.g. using MQTT. In these scenarios it is common to enrich the data with its source, e.g. the site name.
In the sample Mosquitto config below, topics are forwarded and the site name is prepended to the topic path.
topic sensor/# out 0 "" ${SYS_SITE}/
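Reading the bridge line left to right: sensor/# is the topic pattern, out the direction, 0 the QoS level, "" an empty local prefix, and ${SYS_SITE}/ the remote prefix that Mosquitto prepends to the topic before publishing to the central broker.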
Lifecycle management of distributed secrets
Lifecycle management of secrets is often overlooked: it is easy to distribute secrets to remote locations, but how do you clean up secrets that are no longer used?
In the Avassa Edge Platform, you typically tie your vaults to an application deployment. The system ensures the secrets are distributed to the sites where they are needed, and automatically cleans them up when they are not.
name: mosquitto-vault
distribute:
  deployments:
    - mosquitto
Network ingress and egress
By default in an Avassa system, application containers can neither reach out to the network nor be reached from it. When ingress, egress, or both are needed, this is explicitly declared in the application specification. In the example below, a client on the 192.0.2.0/24 network can access the application on port 8883, while the application itself can connect to any outside network.
network:
  ingress-ip-per-instance:
    protocols:
      - name: tcp
        port-ranges: "8883"
    inbound-access:
      default-action: deny
      rules:
        192.0.2.0/24: allow
  outbound-access:
    allow-all: true
And now to the missing piece: an Avassa deployment specification, declaring where the application should run:
name: nginx-dep
application: nginx
application-version: 1.1
placement:
  match-site-labels: region = North
deploy-to-sites:
  canary-sites:
    match-site-labels: canary
  canary-healthy-time: 1h
  sites-in-parallel: 50
  healthy-time: 1h
This specification tells the Avassa orchestrator to find all edge sites labelled with “region = North”; this may match thousands of edge sites, each an autonomous Avassa edge site (cluster). The application is then rolled out carefully: first a test/canary phase on all sites labelled canary, then 50 sites at a time, waiting 1h for each batch to prove healthy. Any failure stops the rollout.
Learn more about Avassa Application specifications:
- Writing an application: https://docs.avassa.io/tutorials/writing-an-application
- Defining an application demo: https://youtu.be/tr35M13eKyQ?si=1bW0OZxHk_Hnutai
When discussing workload definitions, it’s useful to step back and recognize that these formats are really examples of Domain-Specific Languages (DSLs). DSLs are designed to capture the essence of a particular problem domain in a compact and expressive way, helping developers focus on what matters most. As Martin Fowler notes, DSLs “capture domain knowledge in a form that developers can work with directly” (Domain-Specific Languages). In that sense, the Avassa application specification is the DSL for the edge domain: simple and familiar like Docker Compose, but extended to express the unique requirements of distributed, offline-first environments. This purpose-built DSL gives edge developers a declarative and straightforward way to define workloads, without stitching together external tools to cover gaps.
Why the Avassa Way Is a Low-Chore Choice
The Avassa application specification is purpose-built for the edge: compact enough to stay readable, yet expressive enough to capture the full set of requirements for distributed environments. By separating application definition from deployment across sites, it provides clarity and flexibility that general-purpose formats simply don’t offer.
For teams coming from Docker Compose or Helm, migrating is a quick and low-chore process. The core concepts will feel familiar, and Avassa provides a feature mapping guide to make the transition even smoother.
The real payoff comes in the long run. With Avassa, edge automation doesn’t depend on stitching together external tooling or components. Instead, you get faster turnaround for edge-specific features, streamlined operations, and a consistent way of working across thousands of sites.
💡 Cheat sheet
| Format | Takeaway |
| --- | --- |
| Docker Compose | Simple and developer-friendly, but single-host and not production/fleet aware. |
| Helm Charts | Powerful for packaging in Kubernetes, but cluster-scoped, complex, and still missing edge-specific semantics. |
| Avassa Application Spec | Combines Compose-like simplicity with edge-native primitives: site-awareness, secure secrets/config distribution, ingress/egress rules, bandwidth policies, and offline-first resilience. |