Networking

Azure Load Balancer

Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes traffic across multiple virtual machines or services. It provides high availability and improves application responsiveness by routing incoming traffic only to healthy instances. With the Standard SKU, frontends can be zone redundant, and a cross-region (global) tier can distribute traffic across Azure regions for additional resilience.

Key Features and Benefits

  • Layer 4 (TCP/UDP) load balancing with low latency, because traffic is not proxied.
  • High availability: health probes ensure traffic is routed only to healthy backend instances.
  • Public (Internet-facing) and Internal (private, within a virtual network) frontends.
  • Zone redundancy with the Standard SKU for resilience across availability zones.
  • Inbound NAT rules and outbound rules for port forwarding and control over outbound connectivity.

Types of Azure Load Balancers

Azure offers two primary types of Load Balancers:

  1. Azure Load Balancer (Standard SKU):
    • Adds zone redundancy, higher scale and throughput, an availability SLA, and more advanced configuration options (such as outbound rules) on top of the Basic feature set.
    • Supports both Public and Internal load balancing.
    • Recommended for production workloads; a CLI sketch for creating one follows this list.
  2. Azure Load Balancer (Basic SKU):
    • A foundational load balancing service with a smaller feature set.
    • Also supports Public and Internal load balancing, but with limited scale, no availability SLA, and no zone redundancy.
    • Historically used for development, testing, or less critical workloads; Microsoft has announced the Basic SKU's retirement, so new deployments should use the Standard SKU.
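
If you script the deployment with the Azure CLI instead of using the portal, the SKU is chosen when the public IP and the load balancer are created. The following is a minimal sketch; the resource group and resource names are placeholders, and flags can vary slightly between CLI versions.

```
# Public IP for the frontend; its SKU must match the load balancer's SKU
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyPublicIP \
  --sku Standard

# Standard SKU public load balancer with one frontend and one backend pool
az network lb create \
  --resource-group MyResourceGroup \
  --name MyLoadBalancer \
  --sku Standard \
  --public-ip-address MyPublicIP \
  --frontend-ip-name MyFrontend \
  --backend-pool-name MyBackendPool
```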

Core Components

A load balancer configuration is assembled from a few building blocks, which the steps and examples below refer to:

  • Frontend IP configuration: the public or private IP address that clients connect to.
  • Backend pool: the NICs or IP addresses of the virtual machines that receive the traffic.
  • Health probes: periodic checks that determine which backend instances may receive new connections.
  • Load balancing rules: mappings from a frontend IP and port to a backend pool and port.
  • Inbound NAT rules: port-forwarding rules that send traffic on a specific frontend port to a single backend instance.
  • Outbound rules: explicit control over outbound (SNAT) connectivity for backend pool members (Standard SKU).

How it Works

When a client sends traffic to a frontend IP address configured on an Azure Load Balancer, the following steps occur:

  1. The Load Balancer receives the incoming network traffic.
  2. It checks the configured load balancing rules to determine how to handle the traffic based on the destination IP address and port.
  3. It selects a healthy instance from the backend pool using a load balancing distribution algorithm (by default, a hash of the 5-tuple: source IP, source port, destination IP, destination port, and protocol). The distribution mode can be changed per rule, as sketched after this list.
  4. It performs Network Address Translation (NAT) to rewrite the destination IP address and port to match the selected backend instance.
  5. The traffic is sent to the selected backend instance.
  6. The backend instance processes the request and sends a response. Because Azure Load Balancer is a pass-through (non-proxying) service, the return traffic flows back to the client without terminating at a proxy, which keeps latency low. (Azure's Floating IP setting provides true Direct Server Return, where the destination address is not rewritten.)
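
The distribution behaviour described in step 3 can be tuned per load balancing rule. As a sketch, again with placeholder names carried over from the earlier example, the rule below uses source-IP affinity so that requests from the same client IP keep landing on the same backend instance; the trade-off is a less even spread across the pool.

```
# --load-distribution: Default = 5-tuple hash, SourceIP = 2-tuple
# (source IP, destination IP), SourceIPProtocol = 3-tuple
az network lb rule create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHttpRule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name MyFrontend \
  --backend-pool-name MyBackendPool \
  --load-distribution SourceIP
# A health probe can be attached to the rule with --probe-name
# (probe creation is shown in the configuration section below).
```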

Configuring Azure Load Balancer

You can configure Azure Load Balancer using the Azure portal, Azure CLI, PowerShell, or ARM templates. Azure CLI equivalents for several of the portal steps are sketched after the list below.

Steps in Azure Portal:

  1. Create a Load Balancer resource.
  2. Configure Frontend IP configurations (Public or Private).
  3. Define Backend pools by adding NICs or IP addresses of your virtual machines.
  4. Create Load balancing rules specifying the frontend IP, port, protocol, backend pool, and backend port.
  5. Configure Health probes to monitor backend instance health.
  6. Optionally, set up Inbound NAT rules or Outbound rules.
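
The same steps can be scripted with the Azure CLI. Continuing the placeholder resources from the earlier sketches (the load balancer and a load balancing rule were created above), the remaining pieces, backend pool membership, a health probe, and an optional inbound NAT rule, might look like this:

```
# Step 3: put a VM's NIC into the backend pool
az network nic ip-config address-pool add \
  --resource-group MyResourceGroup \
  --nic-name MyVm1Nic \
  --ip-config-name ipconfig1 \
  --lb-name MyLoadBalancer \
  --address-pool MyBackendPool

# Step 5: HTTP health probe that load balancing rules reference via --probe-name
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHealthProbe \
  --protocol Http \
  --port 80 \
  --path /

# Step 6 (optional): forward a dedicated frontend port to a single instance,
# for example for management traffic; associate it with a VM afterwards using
# "az network nic ip-config inbound-nat-rule add"
az network lb inbound-nat-rule create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name NatToVm1 \
  --protocol Tcp \
  --frontend-port 50001 \
  --backend-port 22 \
  --frontend-ip-name MyFrontend
```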

Example Scenario: Web Server Farm

Imagine you have deployed three web servers (VM1, VM2, VM3) in an Azure Virtual Machine Scale Set or as individual VMs. You want to distribute incoming HTTP traffic (port 80) across these servers:
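
A minimal configuration for this scenario (the resource names and the 20.10.5.20 address are illustrative) could be:

  • Frontend IP configuration: a public IP address, for example 20.10.5.20.
  • Backend pool: the NICs of VM1, VM2, and VM3.
  • Health probe: HTTP (or TCP) on port 80, so that unresponsive servers are taken out of rotation.
  • Load balancing rule: TCP, frontend port 80 to backend port 80, associated with the backend pool and probe above.

The Azure CLI commands sketched in the previous section can be adapted to build exactly this configuration.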

Now, when users access http://20.10.5.20, the Azure Load Balancer will distribute the requests to one of the available web servers.

Tip: For Layer 7 load balancing (HTTP/HTTPS) with features like SSL termination, URL-based routing, and cookie-based session affinity, consider using Azure Application Gateway.
