Go SDK – Advanced Rate Limiting


Overview

This guide covers advanced rate‑limiting techniques for the go-sdk. It demonstrates how to configure token‑bucket and leaky‑bucket algorithms, apply per‑endpoint limits, and combine limits with dynamic back‑off.

Key Concepts
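The SDK's limiters build on two classic algorithms. A token bucket refills at a steady rate up to a burst capacity; each request consumes one token, so short bursts are allowed while the long-run rate stays capped. A leaky bucket instead drains at a constant rate, smoothing bursts into a steady outflow. As a rough mental model only (not the SDK's implementation), a token bucket can be sketched as:

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket is a toy illustration of the algorithm,
// not the SDK's ratelimit implementation.
type tokenBucket struct {
	rate     float64   // tokens added per second
	burst    float64   // maximum tokens the bucket can hold
	tokens   float64   // current token count
	lastFill time.Time // last refill timestamp
}

func newTokenBucket(rate, burst float64) *tokenBucket {
	return &tokenBucket{rate: rate, burst: burst, tokens: burst, lastFill: time.Now()}
}

// Allow reports whether one request may proceed right now.
func (b *tokenBucket) Allow() bool {
	now := time.Now()
	// Refill tokens for the elapsed time, capped at the burst size.
	b.tokens += now.Sub(b.lastFill).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.lastFill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	tb := newTokenBucket(10, 2) // 10 req/s, burst of 2
	// The burst absorbs the first two back-to-back calls; the third is limited.
	fmt.Println(tb.Allow(), tb.Allow(), tb.Allow()) // true true false
}
```

The burst parameter is what distinguishes the two shapes: set it to 1 and a token bucket behaves much like a leaky bucket, admitting at most one request per refill interval.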

Installation

go get github.com/example/go-sdk

Basic Example (Token Bucket)

package main

import (
    "context"
    "log"
    "sync"

    sdk "github.com/example/go-sdk"
    "github.com/example/go-sdk/ratelimit"
)

func main() {
    // Create a client with a token-bucket limiter: 10 req/s, burst up to 20
    limiter := ratelimit.NewTokenBucket(10, 20)
    client := sdk.NewClient(
        sdk.WithRateLimiter(limiter),
    )

    ctx := context.Background()
    var wg sync.WaitGroup
    for i := 0; i < 30; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            resp, err := client.Do(ctx, sdk.Request{
                Method: "GET",
                URL:    "https://api.example.com/v1/resource",
            })
            if err != nil {
                log.Printf("req %d error: %v", i, err)
                return
            }
            log.Printf("req %d status: %s", i, resp.Status)
        }(i)
    }

    // Wait for all requests to finish instead of guessing with time.Sleep
    wg.Wait()
}

Advanced Example (Hierarchical & Dynamic)

package main

import (
    "context"
    "log"
    "sync/atomic"
    "time"

    sdk "github.com/example/go-sdk"
    "github.com/example/go-sdk/ratelimit"
)

func main() {
    // Global limiter: ~100 req/min, expressed as a per-second rate, with a small burst
    globalLimiter := ratelimit.NewTokenBucket(100.0/60, 5)

    // Per‑endpoint limiter for /v1/heavy: 5 req/s
    heavyLimiter := ratelimit.NewLeakyBucket(5, 5)

    // Combine both using a composite limiter
    composite := ratelimit.NewCompositeLimiter(globalLimiter, map[string]ratelimit.Limiter{
        "/v1/heavy": heavyLimiter,
    })

    client := sdk.NewClient(sdk.WithRateLimiter(composite))

    // Dynamic adjustment: after 30 seconds, raise the heavy endpoint limit
    go func() {
        time.Sleep(30 * time.Second)
        newLimiter := ratelimit.NewLeakyBucket(10, 10)
        composite.UpdateEndpointLimiter("/v1/heavy", newLimiter)
        log.Println("Heavy endpoint limit increased to 10 req/s")
    }()

    ctx := context.Background()
    var cnt int64
    for i := 0; i < 200; i++ {
        go func(i int) {
            path := "/v1/normal"
            if i%10 == 0 {
                path = "/v1/heavy"
            }
            resp, err := client.Do(ctx, sdk.Request{
                Method: "GET",
                URL:    "https://api.example.com" + path,
            })
            if err != nil {
                log.Printf("req %d error: %v", i, err)
                return
            }
            n := atomic.AddInt64(&cnt, 1)
            log.Printf("req %d (%s) → %s (total: %d)", i, path, resp.Status, n)
        }(i)
    }

    // Block forever to keep the demo running; a real program would
    // wait on a sync.WaitGroup instead
    select {}
}


Best Practices

FAQ

Can I change limits without restarting the client?
Yes. Use CompositeLimiter.UpdateEndpointLimiter or replace the global limiter instance.
What happens when the limit is exceeded?
The SDK returns a RateLimitError, which implements error and exposes a RetryAfter duration.
Does the SDK support distributed rate limiting?
Out of the box, limiting is in-process. For distributed control, back a custom Limiter implementation with a shared store (Redis, etc.).