Azure Networking Load Balancer

Introduction to Azure Load Balancer

Azure Load Balancer is a high-performance, low-latency Layer 4 load balancer that provides a highly available and scalable service. It distributes incoming traffic among a pool of backend resources, such as virtual machines, in a way that optimizes resource utilization and ensures service availability.

It operates at the transport layer, distributing TCP and UDP flows across endpoints. Because it is a managed service, Microsoft handles the underlying infrastructure, patching, and maintenance, allowing you to focus on your applications.

Key Benefit: Improves the availability and responsiveness of your applications by distributing traffic across multiple healthy instances.

Key Features

Azure Load Balancer offers a comprehensive set of features to meet various load balancing needs:

High Availability

Automatically detects unhealthy instances and routes traffic away from them to healthy ones, ensuring continuous service operation.

Scalability

Handles millions of flows per second, scaling automatically to meet demand without manual intervention.

Global Reach

Load Balancer itself is a regional service. For global traffic distribution, pair it with Azure Front Door or Azure Traffic Manager, or use the cross-region tier of Azure Load Balancer.

Port Forwarding

Allows mapping of external ports to internal virtual machine ports through inbound NAT rules, enabling direct access to services hosted on individual VMs.

Health Probes

Configurable health probes monitor the health of backend instances, ensuring only healthy instances receive traffic.

Floating IP (Direct Server Return)

Allows backend servers to respond directly to clients, bypassing the load balancer on the return path.

Configuration Overview

Configuring Azure Load Balancer typically involves the following key components:

1. Frontend IP Configuration

This defines the public or private IP address that clients will connect to. You can have one or more frontend IP addresses.
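For illustration, a frontend IP configuration referencing a public IP address might look like the following ARM-style fragment (the name is arbitrary and the resource ID is a placeholder, matching the style of the other examples in this article):

```json
{
  "name": "frontend-ip",
  "properties": {
    "publicIPAddress": {
      "id": "/public/ip/address/id"
    }
  }
}
```

For an internal load balancer, you would instead specify a subnet reference and a private IP allocation method.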

2. Backend Address Pool

This is a collection of virtual machines (or other Azure resources) that will receive the incoming traffic. You associate network interfaces with this pool.
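The pool itself is essentially a named container; membership is typically established from the NIC side, where an IP configuration references the pool's resource ID. A sketch of that association (pool ID is a placeholder):

```json
{
  "name": "ipconfig1",
  "properties": {
    "loadBalancerBackendAddressPools": [
      { "id": "/backend/pool/id" }
    ]
  }
}
```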

3. Health Probes

You configure probes (e.g., TCP, HTTP, HTTPS) to periodically check the health of the backend instances. Load Balancer uses the results to determine which instances are healthy.

Example probe configuration:

{
  "name": "http-probe",
  "properties": {
    "protocol": "Http",
    "port": 80,
    "requestPath": "/",
    "intervalInSeconds": 5,
    "numberOfProbes": 2
  }
}

4. Load Balancing Rules

These rules define how traffic is distributed. A rule associates a frontend IP configuration and port with a backend address pool and port. You can configure floating IP (direct server return) and session persistence here.

Example rule configuration:

{
  "name": "http-rule",
  "properties": {
    "frontendIPConfiguration": {
      "id": "/frontend/ip/config/id"
    },
    "backendAddressPool": {
      "id": "/backend/pool/id"
    },
    "protocol": "Tcp",
    "frontendPort": 80,
    "backendPort": 80,
    "enableFloatingIP": false,
    "idleTimeoutInMinutes": 4,
    "probe": {
      "id": "/health/probe/id"
    }
  }
}

5. Inbound NAT Rules (Optional)

Used to forward traffic from a specific frontend IP and port to a specific backend instance and port. Useful for management ports like RDP or SSH.
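As a sketch, an inbound NAT rule forwarding a high-numbered frontend port to SSH on a single backend instance might look like this (the name, ports, and IDs are illustrative placeholders):

```json
{
  "name": "ssh-vm1",
  "properties": {
    "frontendIPConfiguration": {
      "id": "/frontend/ip/config/id"
    },
    "protocol": "Tcp",
    "frontendPort": 50001,
    "backendPort": 22,
    "enableFloatingIP": false,
    "idleTimeoutInMinutes": 4
  }
}
```

Using a distinct frontend port per VM (for example, 50001, 50002, ...) lets you reach each instance individually through a single frontend IP.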

Conclusion

Azure Load Balancer is a fundamental component of building resilient and scalable applications on Azure. By understanding its features and configuration, you can effectively manage application availability and performance.

For more advanced scenarios like Layer 7 load balancing, SSL termination, and Web Application Firewall (WAF) capabilities, consider using Azure Application Gateway.