
What is Edge AI and why should you use it?

Edge AI is a term we hear increasingly often within the category of edge computing. In this article, we’ll take a closer look at the definition of Edge AI and the business-critical challenges it can address.

What is Edge AI?

At Avassa, we see more and more deployments of what could be called “Edge AI”. So what exactly is Edge AI?

💡 Edge AI is a combination of Edge Computing and Artificial Intelligence

That means the AI algorithm (the trained model) runs on edge computing infrastructure close to the users and to where the data is produced. This allows data to be processed within a few milliseconds to provide real-time feedback. Key use cases such as personal safety, industrial automation, medical data analysis, retail, and quick-serve restaurants require real-time responses and the ability to run without a connection to the central cloud.

NVIDIA summarizes the adoption of Edge AI in the following way:

Since AI algorithms are capable of understanding language, sights, sounds, smells, temperature, faces, and other analog forms of unstructured information, they’re particularly useful in places occupied by end users with real-world problems. These AI applications would be impractical or even impossible to deploy in a centralized cloud or enterprise data center due to issues related to latency, bandwidth, and privacy.

Let us walk through the evolution of AI architectures from Cloud AI to Edge AI. To be able to reason about the architecture, we need to define the basic building blocks.

  • Model: The mathematical function that maps an input to an output. It is produced by the training process.
  • Training: The process of updating the parameters of a model from training data, so that the model “learns” to draw conclusions and to generalize beyond the data it has seen. Training a model requires powerful compute.
  • Training data: A set of data used to train the model to perform a certain task; examples, labeled or not, of inputs and outputs. Generating a good model requires a high volume of high-quality data.
  • Inference: The process of feeding new, unseen data to a trained model to produce a prediction, decision, or classification. Inference is far less compute-intensive than training; the sketch below illustrates the split.
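
To make the split concrete, here is a minimal sketch in plain NumPy (the toy linear model and all names are our own, purely illustrative): training fits the parameters from labeled data, while inference merely applies the learned parameters to a new input.

```python
import numpy as np

# --- Training (compute-intensive, typically done centrally) ---
# Labeled training data: noisy samples of y = 3x + 1.5.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=1000)
y_train = 3.0 * x_train + 1.5 + rng.normal(0, 0.5, size=1000)

# Least-squares fit: the resulting "model" is just the learned
# parameters (w, b).
A = np.stack([x_train, np.ones_like(x_train)], axis=1)
w, b = np.linalg.lstsq(A, y_train, rcond=None)[0]

# --- Inference (cheap, can run at the edge) ---
def infer(x_new: float) -> float:
    """Apply the trained model to new, unseen data."""
    return w * x_new + b

print(infer(4.2))  # prediction for a fresh input, roughly 14.1
```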

Cloud AI

Let us first look at cloud-based AI.

In cloud AI, all the data from the edge is sent to the cloud, both training data and real-time data for inference. The model resides in the cloud where inference is performed, and the response is returned to the edge.
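
As an illustration, a cloud-centric inference call from an edge site might look like the sketch below (the endpoint URL and payload format are hypothetical). Every frame makes a full network round trip, and without connectivity there is no inference at all.

```python
import requests

# Hypothetical central inference endpoint; every frame leaves the site.
CLOUD_ENDPOINT = "https://ai.example.com/v1/infer"

def classify_frame(frame_bytes: bytes) -> dict:
    """Upload raw data to the cloud and wait for the verdict."""
    response = requests.post(
        CLOUD_ENDPOINT,
        data=frame_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=5,  # if the link is down, the application is blind
    )
    response.raise_for_status()
    return response.json()
```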

This architecture has several drawbacks:

  • Latency in response times.
  • It assumes connectivity to the cloud, which is unacceptable for many scenarios, such as personal safety.
  • Compute cost in the cloud: running the inference engine centrally can generate high costs compared to running it at each edge on self-managed compute.
  • Hard to scale: if you have video streams at the edge, the network load and cost increase with each site.
  • Integrity: the data or video at the edge might contain personal information that should not leave the facility.

Edge AI

With Edge AI, we push the model and inference to the edge:

The initial training is performed in the cloud. After the first training phase, the model is distributed to each edge site, where inference is performed locally. Feedback loops are possible, where the central model is retrained with data from the edges and the updated model is redistributed to the sites.
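
On the site itself, local inference can be as simple as loading the distributed model and applying it to data as it is produced. Below is a minimal sketch using the TensorFlow Lite interpreter (the model path and output handling are hypothetical):

```python
import numpy as np
import tensorflow as tf

# The trained model was distributed to this site, e.g. inside a
# container image; the path is a hypothetical example.
interpreter = tf.lite.Interpreter(model_path="/models/detector.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def infer_locally(frame: np.ndarray) -> np.ndarray:
    """Run inference on-site: the raw frame never leaves the facility."""
    interpreter.set_tensor(input_details[0]["index"], frame.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])
```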

This architecture has several benefits in comparison to the cloud-centric model:

  • Low latency and fast response times.
  • It can run autonomously, without a connection to the central cloud.
  • Lower cloud compute and network costs.
  • It can scale to large edge use cases.
  • Real-time data stays on the site, which helps with integrity and privacy.

As Advian puts it so well:

Edge AI speeds up decision-making, makes data processing more secure, improves user experience with hyper-personalization, and lowers costs — by speeding up processes and making devices more energy efficient.

There are two major drivers for pushing AI to the edge: requirements and enabling technologies.

Let us start with the latter:

  • Tools and libraries for neural networks have reached widespread usage and engineering maturity in standard environments. These have also reached a level where they can run on edge infrastructure.
  • Powerful compute infrastructure with GPU capabilities is now available at affordable prices.
  • Widespread adoption of IoT devices such as cameras, LiDAR, and sensors. Technology and pricing have made it possible to deploy these at large scale, which is a precondition for Edge AI: they are the data sources.
  • Edge computing orchestration at scale is now available so that the edge infrastructure and the edge AI applications can be efficiently automated.
  • Container technology enables efficient distribution of models to the edge sites. Since we are Avassa, it is worth elaborating on containers and Edge AI. Containers are the perfect tool to manage the lifecycle of AI models. First, the development cycle shrinks: you can spin up your training environment in minutes and easily share it with the development team. Second, embedding all the dependencies in a container removes complex dependency and configuration management at the edge. Reproducibility and accuracy are essential in AI production environments, and shipping all dependencies inside the container guarantees that you get the same result in every edge location as in your central development environment. Containers also have a small footprint and start quickly, which makes them highly useful for constrained edge environments and for automation. Finally, edge container orchestration platforms give you the high-speed autobahn to distribute and update the model across all edge sites (see the sketch after this list).
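
To make this concrete, here is a minimal sketch of what a containerized inference service’s entrypoint might look like (Flask, the route, and the model path are our own illustrative choices, not a prescribed stack). The model file and all pinned dependencies ship inside the same image, so every site runs an identical stack:

```python
# entrypoint.py: packaged into the container image together with the
# model file and pinned dependencies, so every edge site runs the same stack.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical model path baked into the image; in practice this is where
# you would load, say, the TensorFlow Lite interpreter sketched earlier.
MODEL_PATH = "/app/model/detector.tflite"

@app.route("/infer", methods=["POST"])
def infer():
    frame = request.get_data()  # raw bytes from an on-site camera or sensor
    # ... run the local model on `frame` here ...
    return jsonify({"bytes_received": len(frame)})

if __name__ == "__main__":
    # The edge orchestrator starts one such container per site.
    app.run(host="0.0.0.0", port=8080)
```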

In summary: technical advances in hardware, machine learning, and edge container technology have paved the way for running AI models efficiently on edge devices.

In the near future, we will also see more and more training being pushed to the edge, using, for example, TensorFlow Lite.

Examples of Edge AI

But technology alone does not drive new solutions. First of all, there must be a need to fulfill. Talking to our customers, we see examples like the ones below:

Manufacturing and Industrial IoT: Edge AI provides rapid collection and analysis of data from edge-based sensors on, for example, assembly lines. Manufacturers can implement automated early quality control, which saves the time and money of manual human inspection and, possibly even more important, gives a higher degree of early detection.

Keep reading: Why breaking free from data silos is the key to success in Industry 4.0

Mining: Industries like mining need to guarantee personal safety. AI at the edge can detect threats, give early warnings, and indicate if individuals are not wearing the required protective equipment. Autonomous vehicles are becoming increasingly common to avoid having people in the mines; these need fast, autonomous AI applications onboard the truck. AI at the edge also enables a higher degree of automation in mining processes.

Retail and restaurants: Edge AI is used to improve the customer experience, enable checkout-free shopping, and reduce fraud. These applications need to run autonomously.

Keep reading: Towards an application-centric PaaS for Retail Stores

In this article, we have shown that you can solve business-critical problems with an efficient Edge AI architecture built from the following building blocks:

  1. Your favorite AI/ML software toolkit
  2. Automated CI/CD pipeline to build the AI models as containers
  3. Deployed edge infrastructure
  4. Edge orchestration solution to manage the edge sites and automatically deploy model containers to the edge
