Azure Load Balancer
Distribute network traffic and provide high availability for your applications.
Understanding Azure Load Balancer
Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes incoming network traffic across a pool of backend resources, such as virtual machines or Virtual Machine Scale Sets. It can handle millions of requests per second and provides high availability for your applications by detecting unhealthy instances and automatically routing traffic to the remaining healthy ones.
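By default, the load balancer hashes each flow's five-tuple (source IP, source port, destination IP, destination port, protocol) to pick a backend, so all packets of a flow reach the same instance. The sketch below is a simplified Python illustration of that idea only, not the platform's actual algorithm; the backend names and addresses are placeholders.

```python
import hashlib

def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 protocol: str, backends: list[str]) -> str:
    """Illustrative five-tuple hashing: every packet of the same flow hashes
    to the same value, so the whole flow lands on one backend instance."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int(hashlib.sha256(flow).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["VM1", "VM2", "VM3"]
print(pick_backend("203.0.113.7", 51432, "20.1.1.1", 80, "TCP", backends))
```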
Key benefits include:
- High Availability: Ensures your application remains available even if one or more backend instances fail.
- Scalability: Allows you to scale your application horizontally by adding more backend instances.
- Performance: Distributes traffic efficiently across healthy instances, optimizing resource utilization.
- Layer 4 Load Balancing: Operates at the transport layer, making it suitable for a wide range of applications.
Types of Azure Load Balancer
Azure Load Balancer offers two main types:
Public Load Balancer
A Public Load Balancer provides load balancing for internet-facing applications. It has a public IP address that acts as the single point of contact for clients. Traffic is directed to VMs within a virtual network.
Internal Load Balancer
An Internal Load Balancer (also known as a private load balancer) balances traffic within a virtual network. It uses a private IP address for the frontend configuration, making it ideal for internal services and for multi-tier applications where backend tiers should be reachable only from inside the network.
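The practical difference between the two types shows up in the frontend IP configuration. The snippets below sketch each shape as ARM/REST-style Python dictionaries; the resource IDs and names are placeholders, and property names should be checked against the current ARM reference.

```python
# Public frontend: references a public IP address resource (placeholder ID).
public_frontend = {
    "name": "publicFrontend",
    "properties": {
        "publicIPAddress": {
            "id": "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                  "Microsoft.Network/publicIPAddresses/myPublicIP"
        }
    },
}

# Internal frontend: a private IP address taken from a subnet in the virtual network.
internal_frontend = {
    "name": "internalFrontend",
    "properties": {
        "privateIPAllocationMethod": "Static",
        "privateIPAddress": "10.0.1.10",
        "subnet": {
            "id": "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                  "Microsoft.Network/virtualNetworks/myVnet/subnets/backendSubnet"
        },
    },
}
```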
Outbound Load Balancing
Azure Load Balancer also provides outbound connectivity for virtual machines. It enables multiple VMs to share a single public IP address for outbound traffic through source network address translation (SNAT), which is crucial for managing public IP address consumption.
Load Balancer SKUs
Azure Load Balancer comes in different SKUs, offering varying capabilities:
| SKU | Features | Use Case |
|---|---|---|
| Standard | Highly available, secure, multi-tenant, and performant. Supports Availability Zones. Advanced diagnostics. | Production workloads requiring high availability, performance, and advanced features. |
| Basic | Entry-level load balancing. Less performant and fewer features than Standard. | Development, testing, or non-critical workloads. |
It's recommended to use the Standard SKU for production environments due to its enhanced features and reliability.
Key Concepts
Understanding these core concepts is essential for configuring Azure Load Balancer (a sketch showing how they fit together in a single load balancer definition follows this list):
- Frontend IP Configuration: The IP address and port combination that clients connect to. This can be public or private.
- Backend Pool: A collection of virtual machines or VM scale sets that will receive the incoming traffic.
- Load Balancing Rule: Defines how traffic is distributed. It maps a frontend IP configuration and port to a backend pool and port.
- Health Probe: A mechanism to check the health of backend instances. If an instance fails the health probe, it's removed from the load balancing rotation.
- Outbound Rules (for Standard SKU): Define how outbound traffic from the backend pool is directed.
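The sketch below shows how these pieces reference one another in a single load balancer definition, expressed here as a Python dictionary following the ARM/REST request-body shape for Microsoft.Network/loadBalancers. The names, IDs, and subscription path are placeholders, and property names should be verified against the current API reference.

```python
# ARM/REST-style request body for a load balancer (placeholder names and IDs).
SUB = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network"
LB_ID = f"{SUB}/loadBalancers/myLoadBalancer"

lb_body = {
    "location": "eastus",
    "sku": {"name": "Standard"},
    "properties": {
        # Frontend IP configuration: the address and port clients connect to.
        "frontendIPConfigurations": [{
            "name": "myFrontend",
            "properties": {
                "publicIPAddress": {"id": f"{SUB}/publicIPAddresses/myPublicIP"}
            },
        }],
        # Backend pool: VMs join it via their NIC IP configurations.
        "backendAddressPools": [{"name": "myBackendPool"}],
        # Health probe: a TCP check on port 80 every 5 seconds.
        "probes": [{
            "name": "tcpProbe",
            "properties": {"protocol": "Tcp", "port": 80, "intervalInSeconds": 5},
        }],
        # Load balancing rule: ties frontend, backend pool, and probe together by ID.
        "loadBalancingRules": [{
            "name": "httpRule",
            "properties": {
                "frontendIPConfiguration": {"id": f"{LB_ID}/frontendIPConfigurations/myFrontend"},
                "backendAddressPool": {"id": f"{LB_ID}/backendAddressPools/myBackendPool"},
                "probe": {"id": f"{LB_ID}/probes/tcpProbe"},
                "protocol": "Tcp",
                "frontendPort": 80,
                "backendPort": 80,
            },
        }],
    },
}
```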
Configuring Azure Load Balancer
Configuring a Load Balancer typically involves the following steps (a Python SDK sketch of these steps follows the list):
- Create a Load Balancer resource: Choose the SKU (Standard or Basic), type (Public or Internal), and optionally enable Availability Zones.
- Configure Frontend IP: Assign a public or private IP address.
- Configure Backend Pool: Add your virtual machines or scale sets to the backend pool.
- Configure Health Probes: Define the protocol, port, and interval for health checks.
- Create Load Balancing Rules: Specify the frontend IP, protocol, port, backend port, and associate it with the backend pool and health probe.
- Configure Outbound Rules (if applicable): Define outbound connectivity for Standard Load Balancers.
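As a sketch of how these steps come together programmatically, the azure-mgmt-network Python SDK can create the load balancer in one call. This is an outline under stated assumptions, not a complete deployment: the subscription ID, resource group, and resource names are placeholders, the `lb_body` dictionary is the one sketched under Key Concepts, and the public IP and virtual network must already exist.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder subscription and resource names.
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# The frontend, backend pool, probe, and rules all travel in the load
# balancer's request body, so several steps collapse into one call.
poller = network_client.load_balancers.begin_create_or_update(
    "my-resource-group",
    "myLoadBalancer",
    lb_body,  # ARM/REST-style body from the Key Concepts sketch
)
load_balancer = poller.result()
print(load_balancer.provisioning_state)
```

Backend VMs are then attached by associating each NIC's IP configuration with the backend pool (step 3), which is configured on the network interface resource rather than on the load balancer itself.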
Example Scenario: Web Server Load Balancing
Imagine you have three web servers (VM1, VM2, VM3) in a virtual network. You want to distribute incoming web traffic (HTTP on port 80) across these servers and ensure that, if one server goes down, traffic is automatically sent to the others. The following configuration achieves this (a small probe-simulation sketch follows the list):
- Frontend IP: A public IP address (e.g., 20.1.1.1).
- Backend Pool: VM1, VM2, VM3.
- Health Probe: TCP probe on port 80, checking every 5 seconds.
- Load Balancing Rule:
- Frontend IP: 20.1.1.1
- Frontend Port: 80
- Backend Port: 80
- Protocol: TCP
- Backend Pool: {VM1, VM2, VM3}
- Health Probe: TCP probe on port 80
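To make the probe behavior concrete, here is a small stand-alone sketch that mimics what a TCP probe does: attempt a TCP connection to port 80 on each server every 5 seconds and take unresponsive servers out of rotation. This is an illustration of the concept, not Azure's probe implementation; the server IP addresses are placeholders.

```python
import socket
import time

SERVERS = {"VM1": "10.0.1.4", "VM2": "10.0.1.5", "VM3": "10.0.1.6"}  # placeholder IPs
PROBE_PORT = 80
INTERVAL_SECONDS = 5

def is_healthy(ip: str, port: int, timeout: float = 2.0) -> bool:
    """A TCP probe succeeds if the three-way handshake completes."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    # Only servers that answer the probe stay in the load balancing rotation.
    healthy = [name for name, ip in SERVERS.items() if is_healthy(ip, PROBE_PORT)]
    print(f"in rotation: {healthy or 'none'}")
    time.sleep(INTERVAL_SECONDS)
```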
Troubleshooting Load Balancer Issues
Common issues and troubleshooting steps include:
- No traffic reaching backend instances:
  - Verify the frontend IP and port configuration.
  - Check whether backend instances are healthy according to the health probe.
  - Ensure Network Security Groups (NSGs) and firewalls on backend VMs allow traffic on the backend port.
  - Confirm the load balancing rule is correctly configured.
- Health probes failing:
  - Ensure the specified port is open and listening on backend instances.
  - Verify the protocol used by the health probe is supported and correctly configured.
  - Check NSGs to ensure health probe traffic can reach the backend instances.
- Performance issues:
  - Consider upgrading to the Standard SKU for better performance.
  - Ensure your backend instances are adequately sized and configured.
  - Monitor network traffic and resource utilization on backend VMs.
Azure Load Balancer exposes metrics and diagnostic logs through Azure Monitor, and configuring them is invaluable for troubleshooting.
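Those metrics can also be queried programmatically. The sketch below uses the azure-monitor-query package to pull a health probe status metric for the last hour; the resource ID is a placeholder, and the metric name ("DipAvailability" for health probe status) should be confirmed against the current metrics reference for your SKU.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID for the load balancer being diagnosed.
LB_RESOURCE_ID = (
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
    "Microsoft.Network/loadBalancers/myLoadBalancer"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    LB_RESOURCE_ID,
    metric_names=["DipAvailability"],  # health probe status (verify in docs)
    timespan=timedelta(hours=1),
)

# Walk the time series and print the average probe availability per interval.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```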