Azure Load Balancer
Last updated: October 26, 2023
Overview
Azure Load Balancer is a fully managed cloud service that provides high availability and network load balancing for applications. It distributes incoming traffic across a pool of backend resources, such as virtual machines.
Key Features
- High Availability: Distributes traffic to healthy instances, ensuring your application remains available.
- Scalability: Scales to handle millions of TCP and UDP flows.
- Layer 4 Load Balancing: Operates at the transport layer (TCP/UDP).
- Internal and Public Load Balancing: Supports both public-facing and internal applications.
- Health Probes: Monitors the health of backend instances to ensure traffic is sent only to healthy resources.
- Port Forwarding: Allows direct access to a specific service on an individual virtual machine through inbound NAT rules (see the conceptual sketch after this list).
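Port forwarding is configured with an inbound NAT rule, which maps a frontend port to a specific port on a single backend virtual machine instead of distributing the traffic across the pool. A minimal conceptual sketch, with illustrative names and port values (here, frontend port 50001 is forwarded to SSH on one VM):

{
  "name": "natRuleSshVm1",
  "frontendIPConfiguration": {
    "id": ".../frontendIPConfigurations/frontend1"
  },
  "protocol": "Tcp",
  "frontendPort": 50001,
  "backendPort": 22,
  "enableFloatingIP": false,
  "idleTimeoutInMinutes": 4
}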
Types of Load Balancers
Azure Load Balancer is offered in two SKUs:
Standard Load Balancer
The Standard Load Balancer provides advanced features and is recommended for most production workloads. It offers:
- Higher scale limits.
- Availability zone support (zone-redundant or zonal frontends) for resilience.
- Private IP address frontend configuration for internal load balancing (both illustrated in the conceptual sketch after this list).
- Network Security Group (NSG) support on the backend pool.
- Direct Server Return (DSR) support.
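The availability zone and private frontend features above can be combined in a single frontend configuration for an internal, zone-redundant load balancer. A conceptual sketch, with illustrative names, addresses, and IDs:

{
  "name": "frontendInternal",
  "zones": ["1", "2", "3"],
  "privateIPAllocationMethod": "Static",
  "privateIPAddress": "10.0.0.10",
  "subnet": {
    "id": ".../virtualNetworks/myVnet/subnets/backendSubnet"
  }
}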
Basic Load Balancer
The Basic Load Balancer is suitable for development, testing, or less critical workloads. It has lower scale limits, lacks advanced features such as availability zone support, and is not backed by a service-level agreement (SLA).
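The SKU is selected when the load balancer resource is created and can't be changed on an existing resource. Conceptually, the resource definition carries a sku block like the one below (setting the name to "Basic" selects the Basic SKU):

{
  "name": "myLoadBalancer",
  "sku": {
    "name": "Standard"
  }
}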
How it Works
Azure Load Balancer distributes new flows across the available backend instances using a hash-based algorithm. By default the hash is computed over five values: source IP address, source port, destination IP address, destination port, and protocol. The load balancer continuously monitors backend health using the configured health probes; an instance that fails its probe is automatically taken out of rotation and stops receiving new flows until it recovers.
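The distribution behavior can be tuned per rule through the loadDistribution setting on a load balancing rule: "Default" hashes on the full five tuple, "SourceIP" hashes on a two tuple (source and destination IP) so that a given client keeps landing on the same backend instance, and "SourceIPProtocol" hashes on a three tuple (source IP, destination IP, protocol). A conceptual rule fragment using client-IP affinity, with illustrative names and ports:

{
  "name": "httpsAffinityRule",
  "protocol": "Tcp",
  "frontendPort": 443,
  "backendPort": 443,
  "loadDistribution": "SourceIP"
}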
Example Configuration
To set up Azure Load Balancer, you typically define:
- Frontend IP Configuration: The IP address(es) that clients connect to.
- Backend Pool: A collection of virtual machines that will receive traffic.
- Load Balancing Rules: Rules that map a frontend IP and port to a backend IP and port, including the protocol and health probe.
- Health Probes: Configurations to check the health of backend instances.
Here's a snippet of what a load balancing rule might look like (conceptual):
{
  "name": "myLoadBalancerRule",
  "frontendIPConfiguration": {
    "id": "/subscriptions/.../resourceGroups/.../providers/Microsoft.Network/loadBalancers/myLoadBalancer/frontendIPConfigurations/frontend1"
  },
  "backendAddressPool": {
    "id": "/subscriptions/.../resourceGroups/.../providers/Microsoft.Network/loadBalancers/myLoadBalancer/backendAddressPools/backendPool1"
  },
  "protocol": "Tcp",
  "frontendPort": 80,
  "backendPort": 80,
  "enableFloatingIP": false,
  "idleTimeoutInMinutes": 4,
  "probe": {
    "id": "/subscriptions/.../resourceGroups/.../providers/Microsoft.Network/loadBalancers/myLoadBalancer/probes/httpProbe"
  }
}
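The rule above references a probe named httpProbe. A matching probe definition, again conceptual and with an illustrative /health request path, might look like this:

{
  "name": "httpProbe",
  "protocol": "Http",
  "port": 80,
  "requestPath": "/health",
  "intervalInSeconds": 15,
  "numberOfProbes": 2
}

An instance that fails to return an HTTP 200 from the probed path for the configured number of consecutive attempts stops receiving new flows until it responds successfully again.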
Use Cases
Azure Load Balancer is ideal for:
- Distributing traffic for web applications.
- Ensuring high availability for critical services.
- Creating scalable solutions for fluctuating demand.
- Providing internal load balancing for private networks.
Next Steps
Explore the following resources to learn more: