Cross-Request LRU Caching
Overview
React.cache() deduplicates only within a single request. For data shared across sequential requests (e.g., a user clicks button A, then button B), use an in-memory LRU cache that outlives individual requests.
Impact
- Priority: HIGH
- Performance: avoids repeated DB/API calls by reusing results across requests
Implementation Example
```typescript
import { LRUCache } from "lru-cache"

const cache = new LRUCache<string, any>({
  max: 1000,          // cap entry count to bound memory usage
  ttl: 5 * 60 * 1000, // expire entries after 5 minutes
})

export async function getUser(id: string) {
  const cached = cache.get(id)
  if (cached) return cached

  const user = await db.user.findUnique({ where: { id } })
  if (user) cache.set(id, user) // guard: lru-cache rejects null/undefined values
  return user
}

// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```
Key Points
- Cross-request sharing: Cache shared across multiple requests
- TTL configuration: set an expiration (`ttl`) that matches how stale the data may safely be
- Memory management: bound memory usage with the `max` entry limit
- Sequential actions: Effective for user's consecutive actions
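To make the `max` and `ttl` mechanics concrete, here is a minimal, dependency-free sketch of the same idea built on a plain `Map` (which preserves insertion order, so the first key is the least recently used). The `TinyLRU` name is illustrative; in production use the `lru-cache` library, which implements this far more efficiently:

```typescript
// Minimal LRU + TTL cache sketch (illustrative only; use lru-cache in production).
type Entry<V> = { value: V; expiresAt: number }

class TinyLRU<V> {
  private map = new Map<string, Entry<V>>()
  constructor(private max: number, private ttl: number) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expiresAt) { // TTL expired: drop the entry
      this.map.delete(key)
      return undefined
    }
    // Re-insert to mark as most recently used (Map keeps insertion order)
    this.map.delete(key)
    this.map.set(key, entry)
    return entry.value
  }

  set(key: string, value: V): void {
    this.map.delete(key)
    if (this.map.size >= this.max) {
      // Evict the least recently used entry (first key in insertion order)
      const oldest = this.map.keys().next().value
      if (oldest !== undefined) this.map.delete(oldest)
    }
    this.map.set(key, { value, expiresAt: Date.now() + this.ttl })
  }
}

const c = new TinyLRU<string>(2, 60_000)
c.set("a", "1")
c.set("b", "2")
c.get("a")      // touch "a", so "b" becomes least recently used
c.set("c", "3") // at capacity: evicts "b"
console.log(c.get("a"), c.get("b"), c.get("c")) // 1 undefined 3
```

Note that a `get` both enforces the TTL and refreshes recency, which is why the eviction in `set` can simply take the oldest key.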
Platform Considerations
Vercel Fluid Compute
LRU caching is especially effective here: multiple concurrent requests can share the same function instance, so the in-memory cache persists across requests without needing Redis.
Traditional Serverless
Each invocation may run in an isolated instance, so the in-memory cache is often cold; consider Redis (or another shared store) for cross-process caching.
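The Redis variant follows the same cache-aside pattern, just against a store that outlives the process. The sketch below codes against a minimal async key-value interface; in production the store would be Redis (e.g., via a client like ioredis), but an in-memory `Map` stub stands in here so the shape is self-contained. All names (`KVStore`, `MapStore`, `loadUserFromDb`, `getUserShared`) are illustrative, not from the original example:

```typescript
// Cache-aside over an async key-value store (Redis in production; Map stub here).
interface KVStore {
  get(key: string): Promise<string | null>
  set(key: string, value: string, ttlMs: number): Promise<void>
}

// In-memory stand-in for Redis so the sketch runs without external services.
class MapStore implements KVStore {
  private data = new Map<string, { value: string; expiresAt: number }>()
  async get(key: string) {
    const e = this.data.get(key)
    if (!e || Date.now() > e.expiresAt) return null
    return e.value
  }
  async set(key: string, value: string, ttlMs: number) {
    this.data.set(key, { value, expiresAt: Date.now() + ttlMs })
  }
}

const store: KVStore = new MapStore()
let dbCalls = 0

// Hypothetical loader standing in for db.user.findUnique.
async function loadUserFromDb(id: string) {
  dbCalls++
  return { id, name: `user-${id}` }
}

async function getUserShared(id: string) {
  const hit = await store.get(`user:${id}`)
  if (hit) return JSON.parse(hit)       // cache hit: no DB query
  const user = await loadUserFromDb(id) // cache miss: query, then populate
  await store.set(`user:${id}`, JSON.stringify(user), 5 * 60 * 1000)
  return user
}
```

Because values pass through the store as strings, they are serialized with JSON here; swapping `MapStore` for a real Redis client changes nothing else in the calling code.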
Use Cases
- User's sequential actions
- Re-requesting same data within short period
- API request optimization
- Database load reduction
Reference
Library: node-lru-cache
Tags: server, cache, lru, cross-request