Azure Load Balancer Overview
Azure Load Balancer is a highly available, scalable service that distributes incoming traffic across a pool of backend resources, such as virtual machines. It provides Layer 4 (TCP/UDP) load balancing and network address translation (NAT).
At a glance, Azure Load Balancer:
- Provides high availability and application scalability.
- Operates at the transport layer (Layer 4).
- Supports both public and internal load balancing scenarios.
- Offers features such as health probes and session persistence.
What is Azure Load Balancer?
Azure Load Balancer is a regional service that distributes incoming traffic across a set of backend resources, such as virtual machines and Virtual Machine Scale Set instances. It provides:
- High Availability: Ensures your applications remain available by directing traffic away from unhealthy instances.
- Scalability: Handles fluctuations in traffic demand by distributing it across multiple instances.
- Performance: Optimizes application performance by balancing load across resources.
- Network Address Translation (NAT): Maps public IP addresses and ports to private IP addresses and ports on backend instances.
Key Concepts
Understanding these concepts is crucial for effective use of Azure Load Balancer; a configuration sketch that ties them together follows the list:
- Load Balancing Rule: Defines how traffic is distributed to backend instances. This includes frontend IP configuration, protocol, port, and backend port.
- Health Probes: Monitor the health of backend instances. If an instance fails a health probe, Load Balancer stops sending traffic to it.
- Frontend IP Configuration: The IP address and port that clients connect to. Can be public or internal.
- Backend Pool: The set of resources (VMs, Virtual Machine Scale Set instances, etc.) that receive the distributed traffic.
- Inbound NAT Rules: Map a specific frontend IP address and port to a specific backend instance and port.
- Outbound Rules: Configure source network address translation (SNAT) so backend instances can make outbound connections.
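To make these concepts concrete, here is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-network), assuming an existing resource group and Standard-SKU public IP address; all names and IDs below are placeholders rather than values from this article. It creates a Standard load balancer whose payload wires together a frontend IP configuration, a backend pool, a health probe, a load-balancing rule, and an inbound NAT rule:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- substitute your own subscription, resource group,
# and an existing Standard-SKU public IP address.
SUB_ID = "<subscription-id>"
RG = "myResourceGroup"
LB_NAME = "myLoadBalancer"
LOCATION = "eastus"
PUBLIC_IP_ID = (
    f"/subscriptions/{SUB_ID}/resourceGroups/{RG}"
    "/providers/Microsoft.Network/publicIPAddresses/myPublicIP"
)

# Child resources live under the load balancer, so rules reference the
# frontend, pool, and probe by their resource IDs.
LB_ID = (
    f"/subscriptions/{SUB_ID}/resourceGroups/{RG}"
    f"/providers/Microsoft.Network/loadBalancers/{LB_NAME}"
)
FRONTEND_ID = f"{LB_ID}/frontendIPConfigurations/myFrontend"
POOL_ID = f"{LB_ID}/backendAddressPools/myBackendPool"
PROBE_ID = f"{LB_ID}/probes/myHealthProbe"

client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

lb = client.load_balancers.begin_create_or_update(
    RG,
    LB_NAME,
    {
        "location": LOCATION,
        "sku": {"name": "Standard"},
        # Frontend IP configuration: the public entry point clients connect to.
        "frontend_ip_configurations": [
            {"name": "myFrontend", "public_ip_address": {"id": PUBLIC_IP_ID}}
        ],
        # Backend pool: VM/VMSS NICs are associated with this pool separately.
        "backend_address_pools": [{"name": "myBackendPool"}],
        # Health probe: instances that fail the probe stop receiving new flows.
        "probes": [
            {
                "name": "myHealthProbe",
                "protocol": "Tcp",
                "port": 80,
                "interval_in_seconds": 5,
            }
        ],
        # Load-balancing rule: frontend port 80 -> backend port 80 over TCP.
        "load_balancing_rules": [
            {
                "name": "myHttpRule",
                "protocol": "Tcp",
                "frontend_port": 80,
                "backend_port": 80,
                "frontend_ip_configuration": {"id": FRONTEND_ID},
                "backend_address_pool": {"id": POOL_ID},
                "probe": {"id": PROBE_ID},
                "idle_timeout_in_minutes": 4,
            }
        ],
        # Inbound NAT rule: forward frontend port 50022 to port 22 on one instance.
        "inbound_nat_rules": [
            {
                "name": "sshToVm1",
                "protocol": "Tcp",
                "frontend_port": 50022,
                "backend_port": 22,
                "frontend_ip_configuration": {"id": FRONTEND_ID},
            }
        ],
    },
).result()

print(f"Provisioned load balancer: {lb.name}")
```

Associating virtual machine NICs with the backend pool (and with the inbound NAT rule) happens on the NIC resources and is omitted here.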
Features
Azure Load Balancer offers a rich set of features (a short session persistence example follows this list):
- Layer 4 Load Balancing: Distributes TCP and UDP traffic.
- High Availability: Built-in redundancy for a robust service.
- Scalability: Scales automatically to handle varying traffic loads.
- Session Persistence (Source IP Affinity): Directs connections from the same client IP address (optionally the same IP address and protocol) to the same backend instance.
- Health Monitoring: Detects and reroutes traffic from unhealthy instances.
- Support for Internal and Public Load Balancing: Choose between private IP addresses for internal access or public IP addresses for internet-facing applications.
- Port Forwarding: Allows external access to specific services running on backend VMs.
- Load Balancer SKU Options: Standard and Basic SKUs offer different feature sets and capabilities.
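As a hedged illustration of the session persistence feature, the snippet below (again using azure-mgmt-network, with placeholder resource and rule names) switches an existing rule's distribution mode to source IP affinity; a load balancer is read and written back as a whole resource:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"
RG = "myResourceGroup"
LB_NAME = "myLoadBalancer"

client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

# Read the existing load balancer, change the rule's distribution mode,
# and write the whole resource back.
lb = client.load_balancers.get(RG, LB_NAME)
for rule in lb.load_balancing_rules or []:
    if rule.name == "myHttpRule":
        # "SourceIP" pins a client IP to one backend instance;
        # "SourceIPProtocol" also includes the protocol in the affinity key.
        rule.load_distribution = "SourceIP"

client.load_balancers.begin_create_or_update(RG, LB_NAME, lb).result()
```

Valid load_distribution values are "Default" (five-tuple hash), "SourceIP", and "SourceIPProtocol".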
Scenario Examples
Azure Load Balancer is ideal for scenarios such as these (an internal load balancer sketch follows the list):
- Web Server Farms: Distributing incoming HTTP/HTTPS traffic to multiple web servers.
- Application Tiers: Load balancing traffic between different tiers of an application (e.g., web tier to application tier).
- Database Clusters: Directing read/write operations to available database instances.
- Internal Applications: Providing high availability for applications accessed only within your virtual network.
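For the internal application scenario, the load balancer's frontend can be a private IP address drawn from a virtual network subnet instead of a public IP address. The sketch below uses placeholder subscription, resource group, VNet, and subnet names to show that variation:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"
RG = "myResourceGroup"
SUBNET_ID = (
    f"/subscriptions/{SUB_ID}/resourceGroups/{RG}"
    "/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/backendSubnet"
)

client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

# An internal (private) load balancer: the frontend is a private IP address
# allocated from a virtual network subnet rather than a public IP address.
client.load_balancers.begin_create_or_update(
    RG,
    "myInternalLoadBalancer",
    {
        "location": "eastus",
        "sku": {"name": "Standard"},
        "frontend_ip_configurations": [
            {
                "name": "privateFrontend",
                "subnet": {"id": SUBNET_ID},
                "private_ip_allocation_method": "Static",
                "private_ip_address": "10.0.2.10",
            }
        ],
        "backend_address_pools": [{"name": "appTierPool"}],
    },
).result()
```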
How it Works
When a client request arrives at the Load Balancer's frontend IP address and port, Load Balancer performs the following actions (an illustration of hash-based distribution follows the list):
- Rules Evaluation: It checks its configured load balancing rules to determine how to handle the incoming traffic based on protocol and port.
- Health Check: It verifies the health of the backend instances using configured health probes.
- Distribution: Based on the load balancing rule and the health of the backend pool, it selects a healthy backend instance. By default it uses a five-tuple hash of source IP, source port, destination IP, destination port, and protocol, so all packets of a given flow reach the same instance.
- NAT: It performs Network Address Translation, mapping the frontend IP and port to the selected backend instance's private IP and port.
- Traffic Forwarding: The request is forwarded to the chosen backend instance.
- Return Traffic: Azure Load Balancer is a pass-through service rather than a proxy, so the backend instance's response flows back to the client without being terminated or rewritten at a proxy layer. (Floating IP can be enabled on a rule for Direct Server Return scenarios.)
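The following standalone Python illustration is not Azure's internal algorithm, but it shows the idea behind hash-based distribution: hashing the flow's five-tuple keeps every packet of one flow on the same healthy backend, while different flows spread across the pool:

```python
import hashlib

# Illustrative only: a stable five-tuple hash used to pick a backend, in the
# spirit of Azure Load Balancer's default distribution mode. The real service
# uses its own internal hash; this sketch just shows why packets belonging to
# the same flow always land on the same (healthy) instance.
def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, healthy_backends):
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    index = int.from_bytes(digest[:4], "big") % len(healthy_backends)
    return healthy_backends[index]

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]

# Every packet of this TCP flow hashes to the same backend...
print(pick_backend("203.0.113.7", 51544, "20.50.60.70", 80, "TCP", backends))
# ...while a new flow from a different source port may land on another one.
print(pick_backend("203.0.113.7", 51545, "20.50.60.70", 80, "TCP", backends))
```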
Supported Protocols
Azure Load Balancer supports TCP and UDP. Because it operates at Layer 4 (the transport layer) of the OSI model, it does not terminate, inspect, or modify application-layer (Layer 7) payloads.
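As a small, hedged example of how the protocol choice surfaces in configuration, here is a UDP load-balancing rule in the dict form used in the earlier sketches (placeholder IDs and names). Note that health probes themselves support only TCP, HTTP, or HTTPS, so UDP workloads are typically probed on a TCP or HTTP endpoint exposed by the same backend instances:

```python
# Placeholder parent resource ID; see the earlier load balancer sketch.
LB_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Network/loadBalancers/myLoadBalancer"
)

udp_rule = {
    "name": "myDnsRule",
    "protocol": "Udp",  # allowed values: "Tcp", "Udp", or "All"
    "frontend_port": 53,
    "backend_port": 53,
    "frontend_ip_configuration": {"id": f"{LB_ID}/frontendIPConfigurations/myFrontend"},
    "backend_address_pool": {"id": f"{LB_ID}/backendAddressPools/myBackendPool"},
    "probe": {"id": f"{LB_ID}/probes/myHealthProbe"},  # a TCP or HTTP(S) probe
}
```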
Pricing
Azure Load Balancer pricing is based on the SKU (Basic or Standard), the number of configured load-balancing and outbound rules, and the amount of data processed. For detailed pricing information, see the Azure Load Balancer pricing page.
Next Steps
Ready to deploy Azure Load Balancer? Explore the Azure Load Balancer documentation for quickstarts on creating public and internal load balancers, along with guidance on health probes, SKUs, and outbound rules.