Azure Load Balancer
Azure Load Balancer is a highly available and scalable cloud service that distributes incoming network traffic. It provides Layer 4 (TCP/UDP) load balancing, allowing you to distribute traffic across multiple virtual machines or container instances, which keeps your applications available and responsive.
Load Balancer operates at the Transport layer of the OSI model and can distribute network traffic to services hosted on Azure virtual machines. It is typically used to spread load across a fleet of virtual machines within a virtual network.
Key Features
- High Availability: Designed to be highly available, ensuring your applications remain accessible.
- Scalability: Automatically scales to handle varying traffic loads.
- Internal and External Load Balancing: Supports both public-facing and internal-only load balancing scenarios.
- Layer 4 Load Balancing: Distributes traffic based on IP address and port.
- Health Probes: Continuously monitors the health of backend resources to ensure traffic is only sent to healthy instances.
- Session Persistence: Configurable to maintain user sessions on the same backend server.
- Network Address Translation (NAT): Supports inbound and outbound NAT rules.
- Integration: Seamlessly integrates with other Azure services like Virtual Machine Scale Sets and Azure Kubernetes Service (AKS).
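The Layer 4 distribution and session persistence features above can be sketched in a few lines. Azure Load Balancer hashes the flow's five-tuple (source IP, source port, destination IP, destination port, protocol) by default, and session persistence narrows the hash to a two- or three-tuple so a client keeps hitting the same backend. The function below is an illustrative model of that idea, not Azure's actual implementation; the names and the use of SHA-256 are assumptions for the sketch.

```python
import hashlib

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, protocol,
                 persistence=None):
    """Choose a backend via a tuple hash, loosely mimicking Azure's
    distribution modes. `persistence` may be None (5-tuple, the default),
    "client_ip" (2-tuple), or "client_ip_protocol" (3-tuple)."""
    if persistence == "client_ip":
        key = (src_ip, dst_ip)
    elif persistence == "client_ip_protocol":
        key = (src_ip, dst_ip, protocol)
    else:
        key = (src_ip, src_port, dst_ip, dst_port, protocol)
    # Hash the tuple and map it onto the backend pool deterministically.
    digest = hashlib.sha256(repr(key).encode()).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]
```

With "client_ip" persistence, the same client lands on the same backend even when its source port changes between connections; with the default 5-tuple mode, each new flow may go to a different instance.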
Types of Load Balancers
Two Azure services are commonly used for load balancing, at different layers of the stack:
1. Azure Load Balancer (Standard and Basic SKU)
This is a regional, highly available, and scalable Layer 4 load balancer. It distributes network traffic to virtual machines and container instances. It supports both public and internal IP addresses.
- Standard SKU: Offers advanced features such as availability zones, better diagnostics, and increased scale.
- Basic SKU: A simpler, cost-effective option for basic load balancing needs. Note that Microsoft has announced the retirement of the Basic SKU, so new deployments should use Standard.
2. Azure Application Gateway
While often considered a load balancer, Application Gateway is a Layer 7 (HTTP/HTTPS) load balancer. It provides more advanced routing capabilities, such as URL-based routing, SSL termination, and Web Application Firewall (WAF) integration. For content-aware load balancing, Application Gateway is the preferred choice.
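The content-aware routing that sets Application Gateway apart can be illustrated with a small sketch: a Layer 7 load balancer inspects the HTTP request path and sends it to a pool chosen by rule, rather than hashing addresses and ports. The rule and pool names below are hypothetical, and real path-based rules in Application Gateway are configured declaratively rather than in code.

```python
def route_request(path, rules, default_pool):
    """Return the backend pool for a request path, illustrating the
    URL-based routing a Layer 7 load balancer performs. `rules` maps
    a path prefix to a pool name; unmatched paths use the default."""
    for prefix, pool in rules.items():
        if path.startswith(prefix):
            return pool
    return default_pool
```

For example, with rules sending "/images/" to an image pool and "/api/" to an API pool, a request for "/images/logo.png" reaches the image servers while "/checkout" falls through to the default web pool. A Layer 4 load balancer cannot make this decision, because it never looks inside the HTTP payload.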
How It Works
Azure Load Balancer receives incoming network traffic requests and distributes them to a pool of backend resources (e.g., virtual machines). It uses health probes to determine the health of each backend instance. If an instance fails a health probe, the load balancer stops sending traffic to it until it becomes healthy again.
Key components:
- Frontend IP configuration: Public or private IP addresses that clients connect to.
- Backend address pool: A collection of virtual machines or instances that will receive the traffic.
- Health probes: Used to monitor the health of backend instances.
- Load balancing rules: Define how traffic is distributed based on IP address and port.
- Inbound NAT rules: Map a specific public IP address and port to a specific VM and port.
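The health-probe behavior described above can be modeled with a short sketch: an instance is taken out of rotation after a configurable number of consecutive probe failures and re-enters the pool once probes succeed again. The class names and the exact threshold semantics here are illustrative assumptions, not Azure's internal implementation.

```python
class ProbedBackend:
    """Track probe results for one backend. An instance is considered
    unhealthy after `threshold` consecutive probe failures (illustrative
    of a load balancer's unhealthy threshold) and recovers on success."""
    def __init__(self, name, threshold=2):
        self.name = name
        self.threshold = threshold
        self.failures = 0

    def record_probe(self, succeeded):
        # A success resets the counter; a failure increments it.
        self.failures = 0 if succeeded else self.failures + 1

    @property
    def healthy(self):
        return self.failures < self.threshold

def healthy_pool(backends):
    """Only healthy instances receive new flows."""
    return [b for b in backends if b.healthy]
```

In the real service, the probe interval and unhealthy threshold are set on the health probe resource, and the load balancing rules only distribute new flows to instances the probe currently reports as healthy.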
Common Use Cases
- Distributing traffic across multiple web servers for improved performance and availability.
- Ensuring high availability for critical applications by automatically failing over to healthy instances.
- Creating internal load-balanced applications accessible only within your virtual network.
- Scaling out stateless applications that can handle requests from any instance.
- Distributing traffic for background services or worker roles.
Getting Started
To get started with Azure Load Balancer:
- Create a Virtual Network: Ensure you have a virtual network configured for your resources.
- Deploy Backend Resources: Deploy your virtual machines or other resources that will serve traffic.
- Create a Load Balancer: Choose either the Basic or Standard SKU and configure its frontend IP, backend pool, health probes, and load balancing rules via the Azure portal, Azure CLI, or PowerShell.
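The steps above can be sketched with the Azure CLI. This is a minimal outline of a Standard SKU setup; the resource names (myRG, myLB, myPublicIP, myFrontEnd, myBackEndPool, myHealthProbe, myHTTPRule) are placeholders you would replace with your own, and it assumes the resource group and backend VMs already exist.

```shell
# Public IP for the frontend (a Standard load balancer needs a Standard SKU IP)
az network public-ip create --resource-group myRG --name myPublicIP --sku Standard

# Load balancer with a frontend IP configuration and an empty backend pool
az network lb create --resource-group myRG --name myLB --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool

# TCP health probe on port 80
az network lb probe create --resource-group myRG --lb-name myLB \
  --name myHealthProbe --protocol tcp --port 80

# Rule distributing TCP port 80 traffic to the backend pool, gated by the probe
az network lb rule create --resource-group myRG --lb-name myLB \
  --name myHTTPRule --protocol tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe
```

After this, you would add your virtual machines' network interfaces (or a Virtual Machine Scale Set) to the backend pool so they start receiving traffic.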
You can find detailed tutorials and documentation on the Azure portal or via the links below.
Learn More: