AVI/ALB Lab: Architecture


Welcome to Part 1 of the AVI/ALB Lab. In the previous post, we covered the introduction to the AVI/ALB lab series.
In this post we'll cover the architecture of the Avi load balancer solution.

Architecture Overview

The Avi load balancer is a really nice fit with NSX-T, and at a high level it follows a very similar architecture.
Like NSX-T, it uses a 3 node controller cluster which houses the management and control planes, and a separate data plane which runs on the Service Engines. You can think of a Service Engine (SE) as similar to an NSX Edge, though the two are quite different, even if the SE can also do routing!
Because the control and data planes are split across separate appliances, a loss of the control plane does not affect the load balancers: the SEs continue to function in "headless mode" even when the entire controller cluster is down. New configuration and configuration changes are not possible, but the virtual services already placed on the SEs continue to function without issues.
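The headless behaviour can be pictured with a small sketch (the class and method names here are illustrative only, not the Avi API): the data plane keeps serving the virtual services it already has, while anything that needs the controller is refused.

```python
class ServiceEngine:
    """Toy model of an Avi SE: serves already-placed virtual services."""

    def __init__(self, virtual_services):
        self.virtual_services = set(virtual_services)
        self.controller_reachable = True  # headless mode when False

    def handle_request(self, vs_name):
        # Data plane path: works regardless of controller availability.
        return vs_name in self.virtual_services

    def apply_config(self, vs_name):
        # Control plane operation: refused while headless.
        if not self.controller_reachable:
            raise RuntimeError("headless mode: configuration changes unavailable")
        self.virtual_services.add(vs_name)


se = ServiceEngine(["web-vs"])
se.controller_reachable = False       # entire controller cluster down
print(se.handle_request("web-vs"))    # prints True: existing VS still serves
```

Existing traffic keeps flowing; only `apply_config` (a stand-in for pushing new config from the controllers) fails until the cluster is back.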

So let’s break this down and look at each of the planes.

Control Plane

Contains the Avi Controllers – this is the point of administration, maintenance and reporting for the Avi infrastructure. It requires a virtualization infrastructure to run, e.g. VMware, AWS, etc.
Controllers can be deployed as a standalone or redundant three-node cluster.
They use a ZooKeeper-like model: the three-node cluster maintains a quorum, all controllers are active, and workloads are sharded across them. Management can be performed from any controller in the cluster without needing to know which node is the leader.
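The three-node sizing follows from the majority rule used by quorum-based clusters: losing one controller still leaves a majority, losing two does not. A minimal sketch of that rule (not Avi's actual election code):

```python
def has_quorum(alive, cluster_size=3):
    """A cluster keeps quorum while a strict majority of nodes is alive."""
    return alive > cluster_size // 2

print(has_quorum(3))  # True: all controllers up
print(has_quorum(2))  # True: one controller lost, cluster still operates
print(has_quorum(1))  # False: quorum lost, SEs drop into headless mode
```

This is also why a standalone (one-node) deployment has no redundancy: any failure immediately takes the control plane down.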

3 Node Cluster

When used in write access mode deployments, the Controllers work with the underlying orchestrator to launch new SEs as needed. It is the Controller's job to place virtual services on SEs, either to load balance new applications or to increase the capacity of running ones. Information is securely exchanged between the Controllers and the SEs, including health, connection stats and logs. The Controllers also host the management console, which can be accessed via the web UI, CLI or API.
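Virtual-service placement can be pictured as choosing the least-loaded SE, a deliberately simplified illustration; Avi's real placement logic also weighs SE groups, HA mode and per-SE capacity limits:

```python
def place_virtual_service(vs_name, se_load):
    """Place a VS on the SE currently hosting the fewest virtual services.

    se_load maps SE name -> number of virtual services already placed.
    """
    target = min(se_load, key=se_load.get)
    se_load[target] += 1
    return target

se_load = {"se-1": 2, "se-2": 0, "se-3": 1}
print(place_virtual_service("app-vs", se_load))  # prints se-2
```

In a write access deployment, if no SE had spare capacity, the Controller would instead ask the orchestrator (vCenter, NSX-T, AWS, etc.) to spin up a new SE and place the virtual service there.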

Data Plane

Contains the Avi Service Engines – these are analogous to the ports on a physical appliance and perform the data / packet processing. They also require a virtualization infrastructure to run, which may or may not be the same infrastructure that hosts the controllers. Service Engines (SEs) handle all data plane operations within Avi Vantage by receiving and executing instructions from the Controller. The SEs perform load balancing and all client- and server-facing network interactions, and they collect real-time application telemetry from the traffic flows they handle.
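The telemetry role can be sketched as an SE rolling per-connection samples up into the stats it reports back to the Controller (the field names here are illustrative, not the actual Avi metrics schema):

```python
from statistics import mean

def summarise_flows(flows):
    """Aggregate per-connection samples into SE-level metrics."""
    return {
        "connections": len(flows),
        "bytes_total": sum(f["bytes"] for f in flows),
        "avg_latency_ms": mean(f["latency_ms"] for f in flows),
    }

flows = [
    {"bytes": 1200, "latency_ms": 4.0},
    {"bytes": 800,  "latency_ms": 6.0},
]
print(summarise_flows(flows))
# {'connections': 2, 'bytes_total': 2000, 'avg_latency_ms': 5.0}
```

It is this per-flow visibility on the data path that feeds the analytics the Controllers expose in the UI.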
SEs are typically grouped and can run in various HA modes that I’ll cover in a separate blog.

Deployment Options

Avi can be deployed to numerous environments; for this lab we will focus on two, vCenter and NSX-T deployments. Within these deployment methods we can also choose whether they run in write access mode or not. I will be focusing on write access mode, as it is probably the most common and offers the best functionality and scalability.


AVI/ALB NSX-T Lab Part 2: vCenter and NSX-T Permissions

