Azure Load Balancing

Last updated: October 26, 2023

Introduction to Azure Load Balancing

Azure Load Balancing is a cloud-native solution that distributes incoming application traffic across multiple virtual machines or other backend resources. This enhances the availability and responsiveness of your applications. By distributing the load, you ensure that no single resource becomes overwhelmed, leading to a more stable and performant user experience.

Azure Load Balancing provides a range of services designed to meet diverse traffic management needs, from Layer 4 to Layer 7, and from global to regional. Understanding these services is crucial for designing resilient and scalable applications on Azure.

Types of Load Balancers

Azure offers several load balancing services, each suited for different scenarios:

Azure Load Balancer

Azure Load Balancer is a Layer 4 (TCP/UDP) load balancer that distributes traffic based on IP address and port. It is highly available, scalable, and can provide low-latency, high-throughput traffic distribution. It's ideal for load balancing network traffic to VMs within an Azure region.

Note: Azure Load Balancer operates at the network level, making it efficient for TCP and UDP traffic.
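
As a minimal sketch, the Azure CLI commands below create a Standard public load balancer with a frontend IP configuration and an empty backend pool. All names (resource group, public IP, load balancer, frontend, pool) are placeholders rather than values defined in this article, and exact parameters can vary between CLI versions.

# Sketch: Standard public load balancer with a frontend and an empty backend pool (placeholder names)
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP \
    --sku Standard

az network lb create \
    --resource-group myResourceGroup \
    --name myLoadBalancer \
    --sku Standard \
    --public-ip-address myPublicIP \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool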

Azure Application Gateway

Azure Application Gateway is a Layer 7 (HTTP/HTTPS) load balancer. It offers advanced routing capabilities such as SSL termination, cookie-based session affinity, URL-based content routing, and the ability to redirect traffic. It is designed for web applications.

(Figure: Azure Application Gateway architecture. Visual representation of a typical Application Gateway deployment.)
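
As an illustrative sketch, the following Azure CLI command creates a small Application Gateway v2 instance in an existing virtual network and subnet. The resource group, VNet, subnet, public IP, and backend server addresses are placeholders, and exact parameters (for example, the rule priority) can differ between CLI versions.

# Sketch: Application Gateway v2 fronting two backend servers (placeholder names and addresses)
az network application-gateway create \
    --resource-group myResourceGroup \
    --name myAppGateway \
    --sku Standard_v2 \
    --capacity 2 \
    --vnet-name myVNet \
    --subnet appGatewaySubnet \
    --public-ip-address myAppGwPublicIP \
    --frontend-port 80 \
    --http-settings-port 80 \
    --http-settings-protocol Http \
    --priority 100 \
    --servers 10.0.1.4 10.0.1.5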

Azure Front Door

Azure Front Door is a global Layer 7 load balancer that leverages the Microsoft global edge network. It provides dynamic application acceleration, global HTTP/HTTPS load balancing, anycast-based routing to the nearest edge location, and TLS termination at the edge for secure access to your applications. It's ideal for applications that need high availability and performance across multiple regions.

Key Feature: Front Door excels at global traffic distribution and improving application responsiveness worldwide.
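
As a rough sketch, the Azure CLI commands below create an Azure Front Door Standard profile and an endpoint; origins and routes would still need to be added. The profile, endpoint, and resource group names are placeholders, and the commands assume a recent Azure CLI that includes the Front Door Standard/Premium (afd) command group.

# Sketch: Front Door Standard profile plus an endpoint (placeholder names; origins and routes not shown)
az afd profile create \
    --resource-group myResourceGroup \
    --profile-name myFrontDoorProfile \
    --sku Standard_AzureFrontDoor

az afd endpoint create \
    --resource-group myResourceGroup \
    --profile-name myFrontDoorProfile \
    --endpoint-name myEndpoint \
    --enabled-state Enabled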

Key Concepts in Load Balancing

Understanding the fundamental components of Azure Load Balancing is essential for effective configuration:

Backend Pools

A backend pool is the collection of virtual machines or instances that receives the incoming traffic. By defining the pool, you specify which resources sit behind your load balancing service, as in the sketch below.
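
For example, with Azure Load Balancer you can create a backend pool and then attach a VM's NIC IP configuration to it. The sketch below uses placeholder names (resource group, load balancer, NIC, pool) rather than values from this article.

# Sketch: create a backend pool and add an existing VM NIC to it (placeholder names)
az network lb address-pool create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myBackEndPool

az network nic ip-config address-pool add \
    --resource-group myResourceGroup \
    --nic-name myVMNic \
    --ip-config-name ipconfig1 \
    --lb-name myLoadBalancer \
    --address-pool myBackEndPool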

Health Probes

Health probes are used to determine the health of the backend resources. The load balancer periodically checks the health of each instance and only directs traffic to healthy instances. If an instance fails a probe, it is temporarily removed from the pool.
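
The fragment below is an illustrative Azure Load Balancer probe definition: with these settings, the probe checks TCP port 80 every 5 seconds and marks an instance unhealthy after 2 consecutive failed checks.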


// Example of a TCP health probe configuration fragment (Azure Load Balancer probe properties)
"properties": {
    "protocol": "Tcp",
    "port": 80,
    "intervalInSeconds": 5,
    "numberOfProbes": 2
}

Load Balancing Rules

Load balancing rules define how traffic is distributed to the backend pool. A rule maps a frontend IP address and port to a backend pool and backend port for a given protocol, and can reference a health probe so that traffic is only sent to healthy instances.
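
As a sketch, the Azure CLI command below creates a rule that forwards TCP traffic arriving on port 80 of the frontend to port 80 on the backend pool, gated by a health probe. All names are placeholders and assume the load balancer, frontend, pool, and probe already exist.

# Sketch: forward frontend port 80 to backend port 80 over TCP (placeholder names)
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --protocol Tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool \
    --probe-name myHealthProbe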

Listeners

For Application Gateway and Front Door, listeners define the port, protocol, host, and frontend IP address on which the load balancer listens for incoming requests.
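
For Application Gateway, a listener can be added with the Azure CLI roughly as follows. The gateway, listener, frontend port, and host name below are placeholders; the frontend port is referenced by the name of an existing frontend port configuration on the gateway.

# Sketch: HTTP listener on an existing Application Gateway (placeholder names)
az network application-gateway http-listener create \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name myListener \
    --frontend-port appGatewayFrontendPort \
    --host-name www.contoso.com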

Routing Rules

Routing rules in Application Gateway and Front Door determine how incoming requests are directed to specific backend pools based on criteria like URL path or hostname.
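
Continuing the Application Gateway sketch above, a basic request-routing rule ties a listener to a backend pool and a set of HTTP settings. The names below are placeholders (the pool and HTTP settings names mirror the defaults generated by the create command), and the priority parameter may not be required on every CLI or SKU version.

# Sketch: basic routing rule wiring the listener to a backend pool (placeholder names)
az network application-gateway rule create \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name myRoutingRule \
    --rule-type Basic \
    --http-listener myListener \
    --address-pool appGatewayBackendPool \
    --http-settings appGatewayBackendHttpSettings \
    --priority 200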

Getting Started: You can start configuring Azure Load Balancer, Application Gateway, or Front Door through the Azure portal, Azure CLI, or Azure PowerShell.

For detailed configuration guides, please refer to the official Azure Load Balancing Documentation.