Just like the cloud, at your edge
Delightfully easy container application management at the edge
Applications are placed at the edge to be outage-resilient and to provide fast, local data processing. Now is the time to extend deployment and monitoring capabilities seamlessly to the edge.
Avassa at Edge Field Day 1
The first-ever Edge Field Day is finally here and Avassa is first up. Jump straight into the live stream to watch a jam-packed demo.
In this demo, we follow a platform engineer and an application developer on their quest to orchestrate and monitor container applications running on on-site edge infrastructure.
WHAT WE DO
Extend your cloud operating model to edge applications
By reusing the tools and workflows you have grown to love in a modern cloud environment, you don’t waste time reinventing trusted processes for the edge.
Avassa bridges the gap between modern containerized application development and operations and the distributed infrastructure. By embracing automation across how you develop, test, and release applications, you can add new features and put them into production at the edge faster than ever.
There is no turning back.
You need to move at the speed of software.
Many enterprises are well on their way to a cloud operating model. While you may have moved significant parts of your applications to one or more public clouds, many of your workloads run much better at the edge. This, however, should not limit your ability or speed to innovate. With Avassa, you can increase efficiency and lower the overhead costs of edge application management by extending your central cloud tooling to the edge.
HOW IT WORKS
Be up and running faster than you can say edge cloud orchestration
You set up new sites by installing the Avassa Edge Enforcer agent on each of the hosts making up a site. The Edge Enforcer is a container application that provides features related to the management of the host-local container runtime, including call-home, cluster formation and container scheduling.
Installing the Edge Enforcer is a simple operation based on a zero-trust architecture that does not require host-specific configuration. Edge Enforcers call home to their configured Control Towers and receive all necessary configuration, including keying material for site- and tenant-level encryption. The basic initial setup of Edge Enforcers can easily be automated across large numbers of sites using deployment tools like Red Hat Ansible or HashiCorp Terraform.
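As a sketch of what such automation could look like, the following Ansible playbook starts the Edge Enforcer container on every host of a site. The image name, registry, container runtime, and Control Tower address are illustrative assumptions, not Avassa's documented installation procedure.

```yaml
# Hypothetical Ansible playbook for rolling out the Edge Enforcer.
# Image name, install method, and Control Tower URL are assumptions.
- name: Install Avassa Edge Enforcer on all site hosts
  hosts: edge_sites
  become: true
  tasks:
    - name: Ensure a container runtime is present
      ansible.builtin.package:
        name: podman
        state: present

    - name: Start the Edge Enforcer container
      containers.podman.podman_container:
        name: edge-enforcer
        image: registry.example.com/avassa/edge-enforcer:latest
        state: started
        restart_policy: always
        env:
          CONTROL_TOWER: "https://control-tower.example.com"
```

Because the Edge Enforcer calls home, the playbook needs no host-specific configuration beyond the shared Control Tower address, so the same play can run unchanged across hundreds of sites.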
You declaratively describe containerized applications using application specifications that define sets of containers that run together as one versioned unit. Application specifications are in YAML or JSON format so that you can manage them as code or through the Control Tower web interface.
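To make this concrete, here is a minimal sketch of what an application specification could look like. The field names and structure are illustrative assumptions for this example, not Avassa's actual schema; consult the platform documentation for the real format.

```yaml
# Hypothetical application specification sketch.
# Field names are illustrative, not Avassa's actual schema.
name: sensor-gateway
version: "1.2.0"
services:
  - name: gateway
    containers:
      - name: collector
        image: registry.example.com/acme/collector:1.2.0
      - name: cache
        image: docker.io/library/redis:7
```

Because the specification is plain YAML, it can live in version control next to the application source and be promoted through environments as code.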
You describe which sites an application will be deployed on using application deployment specifications. Application deployments use a simple matching language to define under which circumstances (e.g. geographical, hardware features, available resources) an application should be started, and how the application should be rolled out across the matched sites.
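A deployment specification for the example application above could then look something like the sketch below. The matching expression and rollout fields are illustrative assumptions meant to show the idea of label-based site selection, not Avassa's actual matching language.

```yaml
# Hypothetical deployment specification sketch.
# Matching and rollout syntax are illustrative assumptions.
name: sensor-gateway-deployment
application: sensor-gateway
application-version: "1.2.0"
# Select sites by labels, e.g. geography and hardware features
match-site-labels: region = eu-north and gpu = true
rollout:
  strategy: rolling
  max-unavailable-sites: 1
```

Separating the deployment specification from the application specification means the same application version can be rolled out to different site populations without touching the application definition.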
After applications are deployed you upgrade them and change which sites they run on by simply changing the application and deployment configuration. The platform figures out the detailed changes needed and performs them with minimal impact.
Your newly started applications need to be monitored for health, and at times you need observability data to detect and resolve the underlying causes of operational issues.
The runtime status of all containers belonging to a deployment is automatically monitored, both from a container runtime perspective and through optional health checks that run application-specific commands. Operational issues are reported from an application-centric perspective, identifying which container, in which application, at which site is affected.
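As an illustration, an application-specific health check could be attached to a container roughly as follows. The probe field names below are hypothetical and chosen for readability; they are not Avassa's actual health-check schema.

```yaml
# Hypothetical health-check fragment for one container.
# Probe field names are illustrative assumptions.
containers:
  - name: collector
    image: registry.example.com/acme/collector:1.2.0
    probes:
      liveness:
        exec:
          cmd: ["/usr/local/bin/healthcheck.sh"]
        interval: 30s
        failure-threshold: 3
```

A failing probe would then surface as an application-centric alert naming the container, application, and site, rather than as an anonymous infrastructure event.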
For observability, you use distributed log queries to fetch historical logs, or stream current logs from your distributed locations. You filter based on time, log content, location and many other aspects to only get the logging data relevant to your current operational issue.
WHAT MAKES US DIFFERENT?
Opinionated and user-friendly
We don’t overcomplicate things. With Avassa, it’s easy to deploy containerized applications across large-scale edge clouds in a few clicks.
Security you can trust
We built our platform on zero-trust with a layered security model across sites and tenants, and eliminated any time-consuming manual work.
Avassa’s platform was built with the needs of application operations at the forefront, freeing you from the massive complexities associated with infrastructure-centric solutions.
What is Edge AI and why should you use it
Edge AI is a term we hear increasingly often within the category of edge computing. In this article, we’ll look closer at the definition of Edge AI and what business-critical…
How to set up a Microsoft Azure AD and Avassa integration
This how-to describes how to configure Azure AD as an OIDC provider for a tenant in an Avassa system. This means that users in this tenant will authenticate through the…
Edge Computing ❤️ Platform Engineering
As digital transformation continues to disrupt industries across the board, platform engineering has emerged as a crucial set of practices and patterns for IT teams to enable self-service capabilities for…
DIVE INTO THE DETAILS
Looking for our platform documentation?
Dive deep into the details of how our solution works in our platform documentation.
Don’t just take our word for it
LET’S KEEP IN TOUCH
Sign up for our newsletter
We’ll send you occasional emails, and you can opt-out at any time.
GET TO KNOW US
We are Avassa
Avassa empowers businesses to bridge the gap between modern containerized application development and operations and the distributed edge infrastructure. Based in Stockholm, Sweden, Avassa was founded in 2020 and is a privately held company funded by Fairpoint Capital and Industrifonden.