# llms-full.txt: cached-middleware-fetch-next

## Overview

The `cached-middleware-fetch-next` library is a Next.js fetch wrapper designed for edge middleware that uses **Vercel Runtime Cache** (https://vercel.com/changelog/introducing-the-runtime-cache-api) as its caching backend. This package provides a drop-in replacement for the native fetch API that mimics Next.js's Data Cache behavior in edge middleware environments, where the standard Data Cache is not available.

-----

## Why This Library Exists

This library addresses a specific limitation of Next.js when hosting on Vercel: the built-in **Data Cache** that normally backs `fetch()` is not available in edge middleware. The package bridges this gap by providing a fetch wrapper that uses Vercel's Runtime Cache as its backend, allowing you to cache fetch requests in middleware with the same familiar API as Next.js's extended fetch.

-----

## Installation

Install via npm, yarn, or pnpm:

```bash
npm install cached-middleware-fetch-next
# or
yarn add cached-middleware-fetch-next
# or
pnpm add cached-middleware-fetch-next
```

-----

## Core Features

### 1. Drop-in Fetch Replacement

The library provides `cachedFetch`, which can be used exactly like the native fetch API:

```typescript
import { NextRequest, NextResponse } from 'next/server';
import { cachedFetch } from 'cached-middleware-fetch-next';

export async function middleware(request: NextRequest) {
  // This will be cached using Vercel Runtime Cache
  const response = await cachedFetch('https://api.example.com/data');
  const data = await response.json();

  // Use the data in your middleware logic
  return NextResponse.next();
}
```

-----

### 2. SWR (Stale-While-Revalidate) Caching Strategy

The package implements **SWR caching behavior**:

1. **Immediate Response**: Always returns cached data immediately if available (even if stale)
2. **Background Refresh**: If data is stale (past the `revalidate` time) but not expired, triggers a background refresh
3. **Non-blocking**: The user gets the stale data immediately while fresh data is fetched in the background

Example usage:

```typescript
// Example: Product data that updates hourly but can be stale for a day
const response = await cachedFetch('https://api.example.com/products', {
  next: {
    revalidate: 3600, // Consider stale after 1 hour
    expires: 86400    // But keep serving stale data for up to 24 hours
  }
});

// Users get instant responses, even with stale data
// Fresh data is fetched in the background when needed
```

-----

### 3. Cache Status Headers

Every response includes detailed cache status information:

```typescript
const response = await cachedFetch('https://api.example.com/data', {
  next: { revalidate: 300 }
});

// Check cache status
const cacheStatus = response.headers.get('X-Cache-Status');   // 'HIT' | 'MISS' | 'STALE'
const cacheAge = response.headers.get('X-Cache-Age');         // Age in seconds
const expiresIn = response.headers.get('X-Cache-Expires-In'); // Time until expiry (if applicable)

console.log(`Cache ${cacheStatus}: ${cacheAge}s old, expires in ${expiresIn}s`);
```

**Cache Status Values:**

- **`HIT`**: Fresh cached data served instantly
- **`STALE`**: Cached data served instantly, background refresh triggered
- **`MISS`**: No cached data available, fetched from origin

-----

### 4. GraphQL Support

The package fully supports caching GraphQL queries sent via POST requests. Each unique query (based on the request body) gets its own cache entry:

```typescript
// Example: Caching GraphQL queries
const response = await cachedFetch('https://api.example.com/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    query: `
      query GetProducts($category: String!) {
        products(category: $category) { id name price }
      }
    `,
    variables: { category: 'electronics' }
  }),
  next: {
    revalidate: 3600, // Cache for 1 hour
    tags: ['products', 'electronics']
  }
});

const data = await response.json();

// Different queries or variables will have different cache keys,
// so this query will be cached separately:
const response2 = await cachedFetch('https://api.example.com/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    query: `query GetProducts($category: String!) { ... }`,
    variables: { category: 'clothing' } // Different variable = different cache key
  }),
  next: { revalidate: 3600 }
});
```

-----

## Implementation Details

### Cache Key Generation

The library generates cache keys that exactly match Next.js's behavior:

- Uses a "v3" version prefix for compatibility
- Creates a **SHA-256 hash** of the request components
- Includes the URL, method, headers, body, and all request options
- Automatically removes the 'traceparent' and 'tracestate' headers to prevent cache fragmentation
- Supports custom cache key prefixes via `next.fetchCacheKeyPrefix`

### Body Processing for Cache Keys

The library handles various body types for proper cache key generation:

- **Uint8Array**: decoded to string, original preserved
- **ReadableStream**: consumed and reconstructed
- **FormData/URLSearchParams**: serialized as key-value pairs
- **Blob**: converted to text and preserved
- **String**: used directly

### Header Processing

Critical for cache key stability, the library removes the trace context headers ('traceparent' and 'tracestate') to prevent cache fragmentation in distributed tracing scenarios.
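The key scheme described above (a "v3" prefix plus a SHA-256 hash over the request components, with trace headers stripped first) can be sketched as follows. This is a minimal illustration, not the library's actual internals; `buildCacheKey` and `TRACE_HEADERS` are hypothetical names.

```typescript
import { createHash } from 'node:crypto';

// Illustrative sketch of the cache-key scheme: trace-context headers are
// removed so distributed-tracing IDs don't fragment the cache, then the
// remaining request components are hashed with SHA-256.
const TRACE_HEADERS = ['traceparent', 'tracestate'];

function buildCacheKey(
  url: string,
  method: string,
  headers: Record<string, string>,
  body = '',
  prefix = ''
): string {
  const stableHeaders = Object.entries(headers)
    .filter(([name]) => !TRACE_HEADERS.includes(name.toLowerCase()))
    .sort(([a], [b]) => a.localeCompare(b)); // order-independent hashing
  const payload = JSON.stringify({ url, method, headers: stableHeaders, body });
  const hash = createHash('sha256').update(payload).digest('hex');
  return `${prefix}v3-${hash}`;
}

// Two requests differing only in trace headers share a key...
const a = buildCacheKey('https://api.example.com/data', 'GET', { traceparent: '00-abc' });
const b = buildCacheKey('https://api.example.com/data', 'GET', { traceparent: '00-def' });
// ...while a different body (e.g. a GraphQL query) gets its own entry.
const c = buildCacheKey('https://api.example.com/graphql', 'POST', {}, '{"query":"..."}');
console.log(a === b, a === c); // true false
```

Sorting the header entries before hashing is what makes the key stable regardless of the order in which headers were set.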
-----

## API Reference

### Main Function: `cachedFetch(input, init?)`

#### Parameters

- `input`: `RequestInfo | URL` - The resource to fetch
- `init?`: `CachedFetchOptions` - Extended fetch options

#### Returns

A `Promise` that resolves to a Response object with additional cache status headers:

- `X-Cache-Status`: `'HIT' | 'MISS' | 'STALE'` - Cache status
- `X-Cache-Age`: `string` - Age of cached data in seconds (0 for fresh/miss)
- `X-Cache-Expires-In`: `string` - Time until the cache expires in seconds (if applicable)

### CachedFetchOptions Interface

The options interface extends the standard `RequestInit` with:

```typescript
interface CachedFetchOptions extends Omit<RequestInit, 'cache'> {
  cache?: 'auto no cache' | 'no-store' | 'force-cache';
  next?: {
    revalidate?: false | 0 | number;
    expires?: number; // absolute expiry in seconds (must be > revalidate)
    tags?: string[];
    fetchCacheKeyPrefix?: string;
  };
}
```

#### Cache Options

- `'force-cache'`: Look for a match in the cache first, fetch if not found or stale
- `'no-store'`: Always fetch from the remote server, bypassing the cache
- `'auto no cache'` (default): Intelligent caching based on context

#### Revalidation Options

- `revalidate`:
  - `false`: Never revalidate (cache indefinitely)
  - `0`: Prevent caching (same as `cache: 'no-store'`)
  - `number`: Time in seconds before data is considered stale
- `expires`:
  - `number`: Absolute expiry time in seconds (must be greater than `revalidate`)
  - If not specified, defaults to 24 hours or 10x the revalidate time, whichever is larger
- `tags`:
  - `string[]`: Cache tags for manual invalidation
  - **Note**: Automatic tag-based revalidation is not supported
  - Tags can be used with Vercel's cache APIs for manual clearing

### Cache Entry Structure

The internal cache entry structure includes:

- `data`: Response body data
- `headers`: Response headers as key-value pairs
- `status`/`statusText`: HTTP response status
- `timestamp`: When the entry was cached
- `revalidateAfter`: When revalidation should occur
- `expiresAt`: When the cache entry expires
- `tags`: Associated cache tags
- `isBinary`/`contentType`: Binary data handling metadata

-----

## Advanced Usage Patterns

### Multi-tenant Caching

```typescript
// Use a custom cache key prefix for multi-tenant scenarios
const response = await cachedFetch('https://api.example.com/data', {
  next: {
    revalidate: 300,
    fetchCacheKeyPrefix: `tenant-${tenantId}`
  }
});
```

### Route Resolution in Middleware

```typescript
import { NextRequest, NextResponse } from 'next/server';
import { cachedFetch } from 'cached-middleware-fetch-next';

export async function middleware(request: NextRequest) {
  const pathname = request.nextUrl.pathname;

  // Cache route resolution for 30 minutes
  const routeResponse = await cachedFetch(
    `https://api.example.com/routes?path=${pathname}`,
    {
      next: {
        revalidate: 1800, // 30 minutes
        tags: ['routes']
      }
    }
  );

  const route = await routeResponse.json();

  if (route.redirect) {
    return NextResponse.redirect(new URL(route.redirect, request.url));
  }

  if (route.rewrite) {
    return NextResponse.rewrite(new URL(route.rewrite, request.url));
  }

  return NextResponse.next();
}
```

-----

## Technical Requirements

### Dependencies

- Next.js 14.0.0 or later
- @vercel/functions 2.2.13 or later
- Node.js 18.0.0 or later
- Deployed on Vercel (the Runtime Cache (https://vercel.com/changelog/introducing-the-runtime-cache-api) is a Vercel feature)

### Environment Behavior

- **On Vercel**: Uses the Runtime Cache for persistence and SWR background refresh via `waitUntil()`
- **Outside Vercel**: Falls back to native `fetch` behavior without caching (local dev or Node runtimes without `@vercel/functions`)

### Edge Runtime Compatibility

This package is designed specifically for the **Edge Runtime** and works in Next.js Middleware using either the Edge or Node.js runtime.
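The environment fallback described above can be sketched roughly as follows. This is an assumption about the general shape, not the package's actual code; `resolveCache` and `cachedFetchSketch` are hypothetical names, and the `RuntimeCache` type is a simplified stand-in.

```typescript
// Illustrative sketch: use the Runtime Cache when @vercel/functions is
// available, otherwise behave like plain fetch (graceful degradation).
type RuntimeCache = {
  get(key: string): Promise<unknown>;
  set(key: string, value: unknown, opts?: { ttl?: number }): Promise<void>;
};

async function resolveCache(): Promise<RuntimeCache | null> {
  try {
    // Dynamic import so local dev / non-Vercel runtimes don't crash at load time.
    const pkg = '@vercel/functions';
    const mod: any = await import(pkg);
    return mod.getCache();
  } catch {
    return null; // not on Vercel: caching disabled, fall through to plain fetch
  }
}

async function cachedFetchSketch(url: string): Promise<Response> {
  const cache = await resolveCache();
  if (!cache) return fetch(url); // outside Vercel: native fetch, no caching
  // ...cache lookup / SWR refresh logic would go here...
  return fetch(url);
}
```

The dynamic import keeps the wrapper loadable in environments where `@vercel/functions` is absent, which is what makes the local-dev fallback possible.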
-----

## Limitations

- Only caches successful responses (2xx status codes)
- Only caches GET, POST, and PUT requests
- Cache tags are stored, but on-demand revalidation is not yet implemented
- The Runtime Cache (https://vercel.com/changelog/introducing-the-runtime-cache-api) has size limits (check the Vercel documentation)
- The `getCache` function from `@vercel/functions` is only available at runtime on Vercel's infrastructure

-----

## Internal Implementation Notes

### Background Refresh Mechanism

The SWR implementation uses Vercel's **`waitUntil()`** function to extend the request lifetime for background refresh operations. When stale data is detected, the library returns the cached data immediately while triggering a background fetch to update the cache.

### Binary Data Handling

The library handles binary data with Base64 encoding/decoding for cross-runtime compatibility, supporting both the **Buffer** (Node.js) and **btoa/atob** (Edge runtime) APIs.

### Graceful Degradation

The implementation includes comprehensive error handling that gracefully falls back to a regular fetch if cache operations fail, ensuring reliability in production environments.

This library provides a robust, production-ready solution for caching fetch requests in Next.js middleware on Vercel, with advanced features like SWR caching, GraphQL support, and comprehensive cache monitoring.
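The cross-runtime Base64 handling mentioned under Binary Data Handling can be sketched like this. The helper names are illustrative, not the library's actual internals; the pattern is simply to prefer `Buffer` when it exists and fall back to `btoa`/`atob` in the Edge runtime.

```typescript
// Sketch: encode/decode binary cache entries to Base64 in a way that
// works in both Node.js (Buffer) and the Edge runtime (btoa/atob).
function bytesToBase64(bytes: Uint8Array): string {
  if (typeof Buffer !== 'undefined') {
    return Buffer.from(bytes).toString('base64'); // Node.js path
  }
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b); // Edge path
  return btoa(binary);
}

function base64ToBytes(b64: string): Uint8Array {
  if (typeof Buffer !== 'undefined') {
    return new Uint8Array(Buffer.from(b64, 'base64')); // Node.js path
  }
  const binary = atob(b64); // Edge path
  return Uint8Array.from(binary, (ch) => ch.charCodeAt(0));
}

// Round-trip: a cached binary body survives encode/decode on either runtime.
const original = new Uint8Array([104, 101, 108, 108, 111]); // "hello"
const roundTrip = base64ToBytes(bytesToBase64(original));
console.log(new TextDecoder().decode(roundTrip)); // hello
```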