This guide demonstrates how to integrate Cloudflare R2 with Payload CMS for efficient media storage. Cloudflare R2 offers an S3-compatible storage solution with significant cost advantages, particularly its zero egress fees policy.
Why Choose R2 with Payload CMS?
- Cost Efficiency: Zero egress fees, unlike traditional S3 providers
- Global Performance: Leverage Cloudflare's global edge network
- Native Integration: Full S3 compatibility with Payload CMS
- Simplified Management: Easy-to-use dashboard and API
- Predictable Pricing: Pay only for storage used, not for bandwidth
Prerequisites
- Cloudflare account with R2 access enabled
- Payload CMS project (version 3.x or later)
- Node.js 18.x or later
- Package manager (npm, yarn, or pnpm)
Implementation Guide
1. Install Required Dependencies
```bash
pnpm add @payloadcms/storage-s3 @aws-sdk/client-s3
```
2. Configure R2 in Cloudflare
- Navigate to Cloudflare Dashboard > R2
- Create a new bucket:
- Click "Create bucket"
- Enter a unique bucket name
- Choose your preferred region
- Generate API credentials:
- Go to "R2 > Manage R2 API Tokens"
- Click "Create API Token"
- Select permissions:
- "Object Read" for read-only access
- "Object Write" for upload capabilities
- Both for full access
- Save your credentials securely
3. Set Up Environment Variables
Create or update your .env file:
```env
# .env
R2_ACCESS_KEY_ID=your_access_key_id
R2_SECRET_ACCESS_KEY=your_secret_access_key
R2_BUCKET=your_bucket_name
R2_ENDPOINT=your-account-id.r2.cloudflarestorage.com
```
Important: `R2_ENDPOINT` should be just the domain, without the `https://` protocol. The protocol is added in the configuration code.
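A common slip here is pasting the endpoint into `.env` with the protocol already attached, which later produces a doubled `https://https://` URL. As a defensive measure, a small helper (hypothetical, not part of Payload or the S3 plugin) can normalize the value so the config always receives exactly one `https://` prefix:

```ts
// Hypothetical helper: accept the endpoint with or without a protocol or
// trailing slash, and return a clean "https://<domain>" string.
function normalizeR2Endpoint(raw: string): string {
  const domain = raw.replace(/^https?:\/\//, "").replace(/\/+$/, "");
  return `https://${domain}`;
}
```

You could then use `endpoint: normalizeR2Endpoint(process.env.R2_ENDPOINT || "")` in the storage config instead of interpolating the variable directly.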
4. Configure Payload CMS
Update your payload.config.ts:
```ts
import { buildConfig } from "payload";
import { s3Storage } from "@payloadcms/storage-s3";
import path from "path";

const storage = s3Storage({
  collections: {
    media: {
      disableLocalStorage: true, // Recommended for production
      prefix: "media", // Optional prefix for uploaded files
      generateFileURL: ({ filename, prefix }) =>
        `https://${process.env.R2_BUCKET}.${process.env.R2_ENDPOINT}/${prefix}/${filename}`,
    },
  },
  bucket: process.env.R2_BUCKET || "",
  config: {
    endpoint: `https://${process.env.R2_ENDPOINT}`, // Protocol is required here
    credentials: {
      accessKeyId: process.env.R2_ACCESS_KEY_ID || "",
      secretAccessKey: process.env.R2_SECRET_ACCESS_KEY || "",
    },
    region: "auto", // Required for R2
    forcePathStyle: true, // Required for R2
  },
});

export default buildConfig({
  admin: {
    // Your admin config
  },
  collections: [
    {
      slug: "media",
      upload: {
        staticDir: path.resolve(__dirname, "../media"),
        // Image sizes are processed locally before upload
        imageSizes: [
          { name: "thumbnail", width: 400, height: 300, position: "centre" },
          { name: "card", width: 768, height: 1024, position: "centre" },
        ],
        formatOptions: {
          format: "webp", // Convert uploads to WebP
          options: { quality: 80 },
        },
      },
      fields: [], // Add custom fields as needed
    },
  ],
  plugins: [storage],
});
```
Critical: The `endpoint` in the config requires the `https://` protocol prefix. The environment variable contains only the domain, and the protocol is added in the configuration. This is essential for the S3 client to work properly.
5. Public Access Configuration
To make your R2 bucket publicly accessible, you have two options:
- Using Cloudflare Workers (Recommended):
```js
// worker.js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const objectKey = url.pathname.slice(1); // Remove leading slash

    try {
      const object = await env.MY_BUCKET.get(objectKey);

      if (!object) {
        return new Response("Object Not Found", { status: 404 });
      }

      const headers = new Headers();
      object.writeHttpMetadata(headers);
      headers.set("etag", object.httpEtag);

      return new Response(object.body, { headers });
    } catch (error) {
      return new Response("Internal Error", { status: 500 });
    }
  },
};
```
- Using Public Bucket (Simpler but less control):
- Enable public access in R2 bucket settings
- Update your generateFileURL config to use the public bucket URL
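If you take the public-bucket route, the `generateFileURL` logic reduces to joining a public base URL with the prefix and filename. A minimal sketch, where `publicBase` (e.g. an `r2.dev` URL or a custom domain you attached to the bucket) is an assumed parameter, not something Payload provides:

```ts
// Hypothetical helper mirroring generateFileURL for a public bucket.
// Encodes the filename so spaces and special characters stay valid in URLs.
function publicFileURL(publicBase: string, prefix: string, filename: string): string {
  const base = publicBase.replace(/\/+$/, ""); // drop any trailing slash
  return `${base}/${prefix}/${encodeURIComponent(filename)}`;
}
```

The `encodeURIComponent` call matters more than it looks: uploaded filenames with spaces otherwise yield URLs that some clients reject.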
Advanced Configuration
Custom Upload Handlers
Use Payload's collection-level hooks to customize upload behavior:
```ts
// In your media collection definition
{
  slug: "media",
  hooks: {
    beforeOperation: [
      async ({ operation, req }) => {
        if (operation === "create" && req.file) {
          // Customize filename or perform validation before upload
          console.log(`Uploading file: ${req.file.name}`);
        }
      },
    ],
    afterChange: [
      async ({ doc, operation }) => {
        if (operation === "create") {
          // Perform actions after successful upload
          console.log(`File ${doc.filename} uploaded successfully`);
        }
      },
    ],
  },
  upload: {
    // upload config...
  },
  fields: [],
}
```
Error Handling
Implement robust error handling:
```ts
try {
  await payload.create({
    collection: "media",
    data: {
      // your upload data
    },
  });
} catch (error) {
  if (error.code === "AccessDenied") {
    console.error("R2 access denied - check credentials");
  } else if (error.code === "NoSuchBucket") {
    console.error("R2 bucket not found");
  } else {
    console.error("Upload failed:", error);
  }
}
```
Troubleshooting Guide
Common Issues
Invalid URL Error
If you encounter `TypeError: Invalid URL` errors:
- Ensure the `endpoint` in your config includes the `https://` protocol
- The environment variable should contain ONLY the domain (e.g., `account-id.r2.cloudflarestorage.com`)
- The protocol is added in the configuration:

```ts
endpoint: `https://${process.env.R2_ENDPOINT}`
```
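The S3 client throws `Invalid URL` precisely when the endpoint string cannot be parsed as a URL, so a quick sanity check in plain Node (no Payload needed) can confirm your value before you boot the CMS. This is an illustrative helper, not part of any library:

```ts
// Returns true only if the endpoint parses as a URL with the https: protocol,
// which is what the S3 client ultimately requires.
function isValidEndpoint(endpoint: string): boolean {
  try {
    return new URL(endpoint).protocol === "https:";
  } catch {
    return false; // new URL() throws on strings without a protocol
  }
}
```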
Connection Errors
- Verify R2 credentials are correct
- Confirm endpoint URL format matches the pattern above
- Check network connectivity
- Validate IP allowlist settings
Upload Failures
- Verify bucket exists and is accessible
- Check API token permissions (needs both Read & Write)
- Confirm file size limits
- Validate MIME type restrictions
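The size and MIME checks above can be enforced before a file ever reaches R2. A minimal sketch, where the allowlist, limit, and function name are all illustrative choices rather than Payload or R2 defaults:

```ts
// Hypothetical pre-upload validation: returns an error message, or null if
// the file passes both checks.
const ALLOWED_MIME_TYPES = ["image/jpeg", "image/png", "image/webp"];
const MAX_FILE_BYTES = 5 * 1024 * 1024; // 5 MB, an assumed limit

function validateUpload(mimeType: string, sizeBytes: number): string | null {
  if (!ALLOWED_MIME_TYPES.includes(mimeType)) {
    return `Unsupported MIME type: ${mimeType}`;
  }
  if (sizeBytes > MAX_FILE_BYTES) {
    return `File too large: ${sizeBytes} bytes (limit ${MAX_FILE_BYTES})`;
  }
  return null;
}
```

A check like this fits naturally inside the `beforeOperation` hook shown earlier, rejecting bad files before any network round-trip to R2.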
URL Generation Issues
- Verify public bucket configuration
- Check custom domain settings
- Validate URL formatting
Health Check
Run this diagnostic code to verify your setup:
```ts
async function checkR2Setup() {
  try {
    const testUpload = await payload.create({
      collection: "media",
      data: {
        // test file data
      },
    });
    console.log("R2 connection successful:", testUpload);
  } catch (error) {
    console.error("R2 setup check failed:", error);
  }
}
```
Performance Optimization
File Compression
- Implement image compression before upload
- Use appropriate file formats
- Set reasonable size limits
Caching Strategy
- Configure browser caching headers
- Implement cache-control policies
- Use Cloudflare's caching features
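One concrete way to apply these caching points is in the Worker from step 5, by setting a Cache-Control header per object. The policy below is a sketch and an assumption on my part (long-lived immutable caching for image assets, revalidation for everything else), not a Cloudflare default:

```ts
// Pick a Cache-Control value based on the object key's extension.
function cacheControlFor(objectKey: string): string {
  const isImmutableAsset = /\.(webp|jpe?g|png|gif|avif)$/i.test(objectKey);
  return isImmutableAsset
    ? "public, max-age=31536000, immutable" // one year for image assets
    : "public, max-age=0, must-revalidate"; // always revalidate other files
}
```

In the Worker, this would slot in as `headers.set("cache-control", cacheControlFor(objectKey))` just before the response is returned.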
Security Best Practices
Access Control
- Use least-privilege access tokens
- Implement bucket policies
- Enable audit logging
Data Protection
- Enable encryption at rest
- Implement secure file validation
- Regular security audits
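As one small piece of "secure file validation", filenames supplied by users should never be trusted as object keys, since path separators can map uploads outside the intended prefix. A hedged sketch of a sanitizer (the exact character policy here is my assumption, not a Payload requirement):

```ts
// Strip path separators, then collapse anything outside a safe character set
// (word characters, dots, hyphens) into underscores.
function sanitizeFilename(name: string): string {
  return name.replace(/[/\\]/g, "").replace(/[^\w.\-]+/g, "_");
}
```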
Resources
- Payload CMS Documentation
- Cloudflare R2 Documentation
- S3 Plugin Documentation
- Cloudflare Workers Documentation
Support
For additional help:
- Join the Payload CMS Discord
- Visit the Cloudflare Community Forums
- Submit issues on GitHub