A high-performance, flexible rate limiting library for TypeScript and Bun.
- Multiple Rate Limiting Algorithms:
  - Fixed Window
  - Sliding Window
  - Token Bucket
- Storage Providers:
  - In-memory storage (default)
  - Redis storage
- Performance Optimizations:
  - Batch operations
  - Lua scripting for Redis
  - Automatic cleanup of expired records
- Flexible Configuration:
  - Custom key generators
  - Skip and handler functions
  - Draft mode (record but don't block)
  - Standard and legacy headers
```sh
bun add ts-rate-limiter
```
```ts
import { RateLimiter } from 'ts-rate-limiter'

// Create a rate limiter with 100 requests per minute
const limiter = new RateLimiter({
  windowMs: 60 * 1000, // 1 minute
  maxRequests: 100 // limit each IP to 100 requests per windowMs
})

// In your Bun server
const server = Bun.serve({
  port: 3000,
  async fetch(req) {
    // Use as middleware
    const limiterResponse = await limiter.middleware()(req)
    if (limiterResponse) {
      return limiterResponse
    }

    // Continue with your normal request handling
    return new Response('Hello World')
  }
})
```
```ts
// Create a rate limiter with more options
const limiter = new RateLimiter({
  windowMs: 15 * 60 * 1000, // 15 minutes
  maxRequests: 100, // limit each IP to 100 requests per windowMs
  standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
  legacyHeaders: false, // Disable the `X-RateLimit-*` headers
  algorithm: 'sliding-window', // 'fixed-window', 'sliding-window', or 'token-bucket'

  // Skip certain requests
  skip: (request) => {
    return request.url.includes('/api/health')
  },

  // Custom handler for rate-limited requests
  handler: (request, result) => {
    return new Response('Too many requests, please try again later.', {
      status: 429,
      headers: {
        // seconds until the current window resets
        'Retry-After': Math.ceil((result.resetTime - Date.now()) / 1000).toString()
      }
    })
  },

  // Custom key generator
  keyGenerator: (request) => {
    // Use user ID if available, otherwise fall back to IP
    const userId = getUserIdFromRequest(request)
    return userId || request.headers.get('x-forwarded-for') || '127.0.0.1'
  },

  // Draft mode - don't actually block requests, just track them
  draftMode: process.env.NODE_ENV !== 'production'
})
```
```ts
import { RateLimiter, RedisStorage } from 'ts-rate-limiter'

// Create a Redis client
const redis = createRedisClient() // use your Redis client library

// Create Redis storage
const storage = new RedisStorage({
  client: redis,
  keyPrefix: 'ratelimit:',
  enableSlidingWindow: true
})

// Create a rate limiter backed by Redis storage
const limiter = new RateLimiter({
  windowMs: 60 * 1000,
  maxRequests: 100,
  storage
})
```
For applications running on multiple servers, use Redis storage to ensure consistent rate limiting:
```ts
import { createClient } from 'redis'
import { RateLimiter, RedisStorage } from 'ts-rate-limiter'

// Create and connect the Redis client
const redisClient = createClient({
  url: 'redis://redis-server:6379'
})

await redisClient.connect()

// Create Redis storage with sliding window for more accuracy
const storage = new RedisStorage({
  client: redisClient,
  keyPrefix: 'app:ratelimit:',
  enableSlidingWindow: true
})

// Create rate limiter with Redis storage
const limiter = new RateLimiter({
  windowMs: 60 * 1000,
  maxRequests: 100,
  storage,
  algorithm: 'sliding-window'
})

// Make sure to handle Redis errors
redisClient.on('error', (err) => {
  console.error('Redis error:', err)
  // You might want to fall back to memory storage in case of Redis failure
})
```
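The fallback idea mentioned in the error handler above can be sketched as a small wrapper: if the Redis-backed check throws, a local in-memory check answers instead of rejecting every request. The `Check` type and `withFallback` helper below are illustrative, not part of the library's API.

```ts
// A generic "check" function: resolves true when the request is allowed.
type Check = (key: string) => Promise<boolean>

// Wrap a primary check (e.g. Redis-backed) with a local fallback so that
// storage outages degrade to in-process limiting instead of failing requests.
function withFallback(primary: Check, fallback: Check): Check {
  return async (key) => {
    try {
      return await primary(key)
    }
    catch {
      // Primary storage is unreachable; answer from the local limiter.
      return fallback(key)
    }
  }
}
```

In practice `primary` would delegate to the Redis-backed limiter and `fallback` to a second limiter built on memory storage, accepting that limits are only approximate while Redis is down.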
- Fixed Window: Simplest approach with good performance; best for non-critical rate limiting
- Sliding Window: More accurate; prevents traffic spikes at window boundaries
- Token Bucket: Best for APIs that need to allow occasional bursts of traffic
- Use memory storage for single-instance applications
- Use Redis storage for distributed applications
- Enable automatic cleanup for long-running applications:

  ```ts
  import { MemoryStorage } from 'ts-rate-limiter'

  const memoryStorage = new MemoryStorage({
    enableAutoCleanup: true,
    cleanupIntervalMs: 60000 // clean up every minute
  })
  ```

- Use batch operations for bulk processing:

  ```ts
  // Process multiple keys at once
  const results = await storage.batchIncrement(['key1', 'key2', 'key3'], windowMs)
  ```
By default, the rate limiter uses IP addresses. For user-based rate limiting:
```ts
const userRateLimiter = new RateLimiter({
  windowMs: 60 * 1000,
  maxRequests: 100,
  keyGenerator: (request) => {
    // Extract user ID from auth token, session, etc.
    const userId = getUserIdFromRequest(request)
    if (!userId) {
      throw new Error('User not authenticated')
    }
    return `user:${userId}`
  },
  skipFailedRequests: true // skip requests whose key generation fails (e.g. unauthenticated users)
})
```
Performance comparison of different algorithms and storage providers:
| Algorithm      | Storage | Requests/sec | Latency (avg) |
| -------------- | ------- | ------------ | ------------- |
| Fixed Window   | Memory  | 2,742,597    | 0.000365ms    |
| Sliding Window | Memory  | 10,287       | 0.097203ms    |
| Token Bucket   | Memory  | 5,079,977    | 0.000197ms    |
| Fixed Window   | Redis   | 10,495       | 0.095277ms    |
| Sliding Window | Redis   | 1,843        | 0.542406ms    |
| Token Bucket   | Redis   | 4,194,263    | 0.000238ms    |
Benchmarked on Bun v1.2.9 on a MacBook Pro (M3), with 100,000 requests per test for memory storage and 10,000 requests per test for Redis. All tests were performed with Redis running locally.
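Throughput and average latency figures like these can be reproduced with a simple timing loop. In this sketch, `check` stands in for whichever limiter/storage combination is under test (e.g. a call into the limiter); it is not a library API.

```ts
// Minimal throughput harness: time n sequential checks and report
// requests/sec plus average per-request latency.
async function bench(check: (key: string) => Promise<unknown>, n: number) {
  const start = performance.now()
  for (let i = 0; i < n; i++) {
    await check('bench-key')
  }
  const elapsedMs = performance.now() - start
  return {
    requestsPerSec: Math.round(n / (elapsedMs / 1000)),
    avgLatencyMs: elapsedMs / n
  }
}
```

Sequential awaits measure per-request latency honestly; a concurrent harness would report higher throughput but hide queuing effects.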
- Fixed Window: the traditional approach, counting requests in a fixed time window.
- Sliding Window: more accurate limiting that accounts for the distribution of requests within the window.
- Token Bucket: a smoother rate limiting experience, focusing on request rates rather than fixed counts.
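To make the token-bucket behaviour concrete, here is a minimal standalone sketch of the algorithm itself (an illustration of the idea, not the library's internal implementation): tokens refill continuously at a fixed rate, and a request is allowed while a token is available, which is what permits short bursts up to the bucket's capacity.

```ts
// Standalone token-bucket sketch: `capacity` caps burst size, `refillPerSec`
// sets the sustained rate.
class TokenBucket {
  private tokens: number
  private lastRefill: number

  constructor(private capacity: number, private refillPerSec: number, now = Date.now()) {
    this.tokens = capacity // start full, so an initial burst is allowed
    this.lastRefill = now
  }

  // Returns true if a token was available (request allowed).
  tryRemove(now = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec)
    this.lastRefill = now
    if (this.tokens >= 1) {
      this.tokens -= 1
      return true
    }
    return false
  }
}
```

A bucket with capacity 100 refilling at 100/60 tokens per second sustains the same 100 requests per minute as the earlier examples, while tolerating a burst of up to 100 at once.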
```sh
bun test
```
Please see our releases page for more information on what has changed recently.
Please see CONTRIBUTING for details.
For help, discussion about best practices, or any other conversation that would benefit from being searchable:
For casual chit-chat with others using this package:
Join the Stacks Discord Server
"Software that is free, but hopes for a postcard." We love receiving postcards from around the world showing where Stacks is being used! We showcase them on our website too.
Our address: Stacks.js, 12665 Village Ln #2306, Playa Vista, CA 90094, United States
We would like to extend our thanks to the following sponsors for funding Stacks development. If you are interested in becoming a sponsor, please reach out to us.
The MIT License (MIT). Please see LICENSE for more information.
Made with 💙