Azure Load Balancer
Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes traffic across multiple virtual machines or services. It provides high availability and improves application responsiveness by routing incoming traffic only to healthy instances. A Standard Load Balancer can be made zone-redundant within a region, and Azure's cross-region (Global tier) load balancer extends resilience across regions.
Key Features and Benefits
- High Availability: Distributes traffic to ensure your applications remain available even if individual instances fail.
- Scalability: Handles fluctuations in traffic by distributing load across a pool of resources.
- Health Probes: Continuously monitors the health of backend instances and directs traffic only to healthy ones.
- Layer 4 Load Balancing: Operates at the transport layer, making decisions based on IP addresses and ports.
- Regional and Zone Redundancy: Offers zone-redundant frontends within a region, plus cross-region load balancing across regions (available for public Standard Load Balancers).
- Internal and Public Facing: Can be used for both internal-only applications and public-facing services.
- Network Address Translation (NAT): Supports inbound NAT rules to direct external traffic to specific internal resources.
Types of Azure Load Balancers
Azure Load Balancer is available in two SKUs:
- Azure Load Balancer (Standard SKU):
  - Provides all the features of the Basic SKU plus zone redundancy, higher scale, and more advanced configurations.
  - Supports both Public and Internal load balancing.
  - Recommended for production workloads.
- Azure Load Balancer (Basic SKU):
  - A foundational load balancing service with no SLA.
  - Supports Public and Internal load balancing, but a backend pool is limited to VMs in a single availability set or virtual machine scale set.
  - Lower scale limits and no zone redundancy.
  - Suitable for development and testing rather than production workloads.
Core Components
- Frontend IP Configuration: The IP address(es) that clients connect to. This can be a public IP for internet-facing services or a private IP for internal services.
- Backend Pool: A collection of virtual machine network interfaces (NICs) or IP addresses to which traffic is directed.
- Load Balancing Rules: Define how incoming traffic from a frontend IP and port is directed to a backend pool on a specific backend port.
- Health Probes: Used to check the health of backend instances. If an instance fails a health probe, it is removed from the load balancing rotation.
- Inbound NAT Rules: Map a specific frontend IP address and port combination to a specific virtual machine and port inside your virtual network.
- Outbound Rules: Define outbound connectivity (SNAT) for your backend instances.
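The health-probe behavior described above can be sketched as a small state machine: an instance is pulled out of rotation after a configurable number of consecutive probe failures and returned once probes succeed again. The class name and default threshold below are illustrative assumptions, not Azure's API.

```python
class ProbeTracker:
    """Tracks probe results and decides whether a backend stays in rotation.

    Illustrative sketch only; the names and defaults are assumptions,
    not Azure's API. The idea matches the docs: an instance is marked
    unhealthy after a number of consecutive probe failures, and healthy
    again once probes start succeeding.
    """

    def __init__(self, unhealthy_threshold: int = 2):
        self.unhealthy_threshold = unhealthy_threshold
        self.consecutive_failures = 0
        self.healthy = True

    def record(self, probe_succeeded: bool) -> None:
        if probe_succeeded:
            self.consecutive_failures = 0
            self.healthy = True
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.unhealthy_threshold:
                self.healthy = False  # removed from load-balancing rotation


tracker = ProbeTracker(unhealthy_threshold=2)
tracker.record(False)
print(tracker.healthy)  # True: one failure is still below the threshold
tracker.record(False)
print(tracker.healthy)  # False: two consecutive failures
tracker.record(True)
print(tracker.healthy)  # True: back in rotation after a successful probe
```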
How it Works
When a client sends traffic to a frontend IP address configured on an Azure Load Balancer, the following steps occur:
- The Load Balancer receives the incoming network traffic.
- It checks the configured load balancing rules to determine how to handle the traffic based on the destination IP address and port.
- It selects a healthy instance from the backend pool using a load balancing distribution algorithm (typically a hash-based algorithm that considers the 5-tuple: source IP, source port, destination IP, destination port, and protocol).
- It performs Network Address Translation (NAT) to rewrite the destination IP address and port to match the selected backend instance.
- The traffic is sent to the selected backend instance.
- The backend instance processes the request and sends a response. Azure Load Balancer is a pass-through (non-proxy) service, so return traffic is rewritten in the platform's data path rather than by a proxy hop, which keeps latency low. True Direct Server Return (DSR) behavior, where the destination is not rewritten and the backend sees the original frontend IP, is available as the optional Floating IP setting.
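The backend-selection step above can be sketched as a hash over the 5-tuple. This is a simplified illustration of the idea, not Azure's actual internal hash: the key property is that packets belonging to the same flow always map to the same backend, while different flows spread across the pool.

```python
import hashlib


def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, protocol):
    """Choose a backend by hashing the 5-tuple.

    Simplified sketch: Azure's real distribution algorithm is internal
    and is not this function; only the flow-affinity property is shown.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:8], "big") % len(backends)
    return backends[index]


pool = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]
# Packets of the same flow (same 5-tuple) always land on the same backend.
a = pick_backend(pool, "203.0.113.7", 51000, "20.10.5.20", 80, "tcp")
b = pick_backend(pool, "203.0.113.7", 51000, "20.10.5.20", 80, "tcp")
print(a == b)  # True
```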
Configuring Azure Load Balancer
You can configure Azure Load Balancer using the Azure portal, Azure CLI, PowerShell, or ARM templates.
Steps in Azure Portal:
- Create a Load Balancer resource.
- Configure Frontend IP configurations (Public or Private).
- Define Backend pools by adding NICs or IP addresses of your virtual machines.
- Create Load balancing rules specifying the frontend IP, port, protocol, backend pool, and backend port.
- Configure Health probes to monitor backend instance health.
- Optionally, set up Inbound NAT rules or Outbound rules.
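The portal steps above can be pictured as a handful of related objects: a frontend IP, a backend pool, and rules that bind them together with a probe. The dataclasses below are a conceptual model only; they are not the Azure SDK types, and the field names are illustrative.

```python
from dataclasses import dataclass, field

# Conceptual model of the resources created in the portal steps above.
# Illustrative only: these are NOT the Azure SDK classes.


@dataclass
class HealthProbe:
    protocol: str
    port: int


@dataclass
class LoadBalancingRule:
    frontend_port: int
    backend_port: int
    protocol: str
    probe: HealthProbe


@dataclass
class LoadBalancer:
    frontend_ip: str
    backend_pool: list = field(default_factory=list)
    rules: list = field(default_factory=list)


# Step 1-2: create the load balancer with a frontend IP.
lb = LoadBalancer(frontend_ip="20.10.5.20")
# Step 3: add backend NICs (hypothetical names).
lb.backend_pool += ["vm1-nic", "vm2-nic", "vm3-nic"]
# Steps 4-5: add a rule that forwards TCP 80 and attach a health probe.
lb.rules.append(LoadBalancingRule(80, 80, "tcp", HealthProbe("tcp", 80)))
print(len(lb.backend_pool), len(lb.rules))  # 3 1
```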
Example Scenario: Web Server Farm
Imagine you have deployed three web servers (VM1, VM2, VM3) in an Azure Virtual Machine Scale Set or as individual VMs. You want to distribute incoming HTTP traffic (port 80) across these servers:
- Frontend IP: A public IP address (e.g., 20.10.5.20).
- Backend Pool: Contains the NICs of VM1, VM2, and VM3.
- Load Balancing Rule:
- Frontend IP: Public IP (20.10.5.20)
- Protocol: TCP
- Frontend Port: 80
- Backend Pool: Your Web Server Pool
- Backend Port: 80
- Health Probe: TCP probe on port 80
Now, when users access http://20.10.5.20, the Azure Load Balancer will distribute the requests across the available web servers.