Overview
This guide covers advanced rate‑limiting techniques for the go-sdk. It demonstrates how to configure token‑bucket and leaky‑bucket algorithms, apply per‑endpoint limits, and combine limits with dynamic back‑off.
Key Concepts
- Token Bucket – Allows bursts up to a defined capacity while sustaining an average rate.
- Leaky Bucket – Guarantees a constant output rate, smoothing bursts.
- Hierarchical Limits – Combine global and per‑method limits.
- Dynamic Adjustment – Modify limits at runtime based on load or external signals.
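To make the token-bucket concept concrete, here is a minimal stdlib-only sketch of the idea: a bucket refills continuously at a fixed rate and each request spends one token. All names here are illustrative, not the go-sdk's actual API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket refills at rate tokens/sec up to capacity; Allow spends one token.
type TokenBucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	rate     float64
	last     time.Time
}

func NewTokenBucket(rate, capacity float64) *TokenBucket {
	return &TokenBucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

func (b *TokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	// Refill proportionally to elapsed time, clamped at capacity.
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	tb := NewTokenBucket(10, 3) // 10 tokens/s, burst of 3
	for i := 0; i < 5; i++ {
		fmt.Printf("request %d allowed: %v\n", i, tb.Allow())
	}
}
```

Because the bucket starts full, the first `capacity` requests pass immediately (the burst), after which requests are admitted only as fast as tokens refill.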
Installation
```shell
go get github.com/example/go-sdk
```
Basic Example (Token Bucket)
```go
package main

import (
	"context"
	"log"
	"sync"

	sdk "github.com/example/go-sdk"
	"github.com/example/go-sdk/ratelimit"
)

func main() {
	// Create a client with a token bucket limiter: 10 req/s, burst up to 20
	limiter := ratelimit.NewTokenBucket(10, 20)
	client := sdk.NewClient(
		sdk.WithRateLimiter(limiter),
	)

	ctx := context.Background()
	var wg sync.WaitGroup
	for i := 0; i < 30; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			resp, err := client.Do(ctx, sdk.Request{
				Method: "GET",
				URL:    "https://api.example.com/v1/resource",
			})
			if err != nil {
				log.Printf("req %d error: %v", i, err)
				return
			}
			log.Printf("req %d status: %s", i, resp.Status)
		}(i)
	}
	// Wait for all goroutines instead of sleeping for a fixed duration
	wg.Wait()
}
```
Advanced Example (Hierarchical & Dynamic)
```go
package main

import (
	"context"
	"log"
	"sync/atomic"
	"time"

	sdk "github.com/example/go-sdk"
	"github.com/example/go-sdk/ratelimit"
)

func main() {
	// Global limiter: ≈100 req/min (1.66 req/s) with a small burst
	globalLimiter := ratelimit.NewTokenBucket(1.66, 5)

	// Per-endpoint limiter for /v1/heavy: 5 req/s
	heavyLimiter := ratelimit.NewLeakyBucket(5, 5)

	// Combine both using a composite limiter
	composite := ratelimit.NewCompositeLimiter(globalLimiter, map[string]ratelimit.Limiter{
		"/v1/heavy": heavyLimiter,
	})

	client := sdk.NewClient(sdk.WithRateLimiter(composite))

	// Dynamic adjustment: after 30 seconds, raise the heavy endpoint limit
	go func() {
		time.Sleep(30 * time.Second)
		newLimiter := ratelimit.NewLeakyBucket(10, 10)
		composite.UpdateEndpointLimiter("/v1/heavy", newLimiter)
		log.Println("Heavy endpoint limit increased to 10 req/s")
	}()

	ctx := context.Background()
	var cnt int64
	for i := 0; i < 200; i++ {
		go func(i int) {
			path := "/v1/normal"
			if i%10 == 0 {
				path = "/v1/heavy"
			}
			resp, err := client.Do(ctx, sdk.Request{
				Method: "GET",
				URL:    "https://api.example.com" + path,
			})
			if err != nil {
				log.Printf("req %d error: %v", i, err)
				return
			}
			// Use the value returned by AddInt64 to avoid a racy read in the log line
			total := atomic.AddInt64(&cnt, 1)
			log.Printf("req %d (%s) → %s (total: %d)", i, path, resp.Status, total)
		}(i)
	}

	// Keep process alive for demo
	select {}
}
```
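The composite pattern above can be sketched in plain Go: a request must pass the global limiter and, if one is registered, its endpoint's limiter too. The `Limiter` interface and every name below are illustrative stand-ins, not the SDK's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// Limiter is an illustrative stand-in for the SDK's limiter interface.
type Limiter interface {
	Allow() bool
}

// countLimiter admits a fixed number of calls; a toy Limiter for the demo.
type countLimiter struct {
	mu   sync.Mutex
	left int
}

func (c *countLimiter) Allow() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.left <= 0 {
		return false
	}
	c.left--
	return true
}

// CompositeLimiter checks the global limiter first, then the per-endpoint one.
type CompositeLimiter struct {
	mu        sync.Mutex
	global    Limiter
	endpoints map[string]Limiter
}

func NewCompositeLimiter(global Limiter, endpoints map[string]Limiter) *CompositeLimiter {
	return &CompositeLimiter{global: global, endpoints: endpoints}
}

func (c *CompositeLimiter) AllowPath(path string) bool {
	c.mu.Lock()
	ep := c.endpoints[path]
	c.mu.Unlock()
	if !c.global.Allow() {
		return false
	}
	return ep == nil || ep.Allow()
}

// UpdateEndpointLimiter swaps a per-endpoint limiter at runtime.
func (c *CompositeLimiter) UpdateEndpointLimiter(path string, l Limiter) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.endpoints[path] = l
}

func main() {
	comp := NewCompositeLimiter(&countLimiter{left: 10}, map[string]Limiter{
		"/v1/heavy": &countLimiter{left: 1},
	})
	fmt.Println(comp.AllowPath("/v1/heavy"))  // passes both limiters
	fmt.Println(comp.AllowPath("/v1/heavy"))  // endpoint budget exhausted
	fmt.Println(comp.AllowPath("/v1/normal")) // only the global limit applies
}
```

Note one design trade-off: this sketch spends a global token even when the endpoint limiter then rejects the call; a production implementation might reserve and refund instead.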
Best Practices
- Start with a modest burst size; increase only after measuring traffic patterns.
- Apply stricter limits to high‑cost endpoints (e.g., analytics, bulk operations).
- Use `CompositeLimiter` to separate global and per‑endpoint concerns.
- Monitor `RateLimitExceeded` metrics to adjust limits proactively.
- Prefer leaky‑bucket for background jobs to guarantee steady processing.
FAQ
- Can I change limits without restarting the client?
  Yes. Use `CompositeLimiter.UpdateEndpointLimiter` or replace the global limiter instance.
- What happens when the limit is exceeded?
  The SDK returns a `RateLimitError`, which implements `error` and provides the `RetryAfter` duration.
- Does the SDK support distributed rate limiting?
  Out‑of‑the‑box it is in‑process. For distributed control, integrate a backend store (Redis, etc.) with a custom `Limiter` implementation.