This guide demonstrates how to integrate Cloudflare R2 with Payload CMS for efficient media storage. Cloudflare R2 offers an S3-compatible storage solution with significant cost advantages, particularly its zero egress fees policy.
Why Choose R2 with Payload CMS?
- Cost Efficiency: Zero egress fees, unlike traditional S3 providers
- Global Performance: Leverage Cloudflare's global edge network
- Native Integration: Full S3 compatibility with Payload CMS
- Simplified Management: Easy-to-use dashboard and API
- Predictable Pricing: Pay only for storage used, not for bandwidth
Prerequisites
- Cloudflare account with R2 access enabled
- Payload CMS project (version 3.x or later)
- Node.js 16.x or later
- Package manager (npm, yarn, or pnpm)
Implementation Guide
1. Install Required Dependencies
```sh
pnpm add @payloadcms/storage-s3 @aws-sdk/client-s3
```
2. Configure R2 in Cloudflare
- Navigate to Cloudflare Dashboard > R2
- Create a new bucket:
- Click "Create bucket"
- Enter a unique bucket name
- Choose your preferred region
- Generate API credentials:
- Go to "R2 > Manage R2 API Tokens"
- Click "Create API Token"
- Select permissions:
- "Object Read" for read-only access
- "Object Write" for upload capabilities
- Both for full access
- Save your credentials securely
3. Set Up Environment Variables
Create or update your .env file:
```
# .env
R2_ACCESS_KEY_ID=your_access_key_id
R2_SECRET_ACCESS_KEY=your_secret_access_key
R2_BUCKET=your_bucket_name
R2_ENDPOINT=your-account-id.r2.cloudflarestorage.com
```
Important: `R2_ENDPOINT` should be just the domain, without the `https://` protocol. The protocol will be added in the configuration code.
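Because a stray `https://` in the env var is such a common mistake, you may want to normalize the value in code before handing it to the S3 client. A minimal sketch; the helper name `resolveR2Endpoint` is ours, not part of Payload or the AWS SDK:

```typescript
// Normalize the R2 endpoint so the S3 client always receives a full
// https:// URL, whether or not the env var already included a protocol.
// `resolveR2Endpoint` is a hypothetical helper, not a library function.
function resolveR2Endpoint(raw: string | undefined): string {
  if (!raw) {
    throw new Error("R2_ENDPOINT is not set");
  }
  // Strip any protocol the user may have included, then re-add https://.
  const domain = raw.replace(/^https?:\/\//, "");
  return `https://${domain}`;
}
```

You would then use `endpoint: resolveR2Endpoint(process.env.R2_ENDPOINT)` in the config instead of building the template string inline.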
4. Configure Payload CMS
Update your payload.config.ts:
```ts
import { buildConfig } from "payload/config";
import { s3Storage } from "@payloadcms/storage-s3";
import path from "path";

const storage = s3Storage({
  collections: {
    media: {
      disableLocalStorage: true, // Recommended for production
      prefix: "media", // Optional prefix for uploaded files
      generateFileURL: ({ filename, prefix }) =>
        `https://${process.env.R2_BUCKET}.${process.env.R2_ENDPOINT}/${prefix}/${filename}`,
    },
  },
  bucket: process.env.R2_BUCKET || "",
  config: {
    endpoint: `https://${process.env.R2_ENDPOINT}`, // Protocol is required here
    credentials: {
      accessKeyId: process.env.R2_ACCESS_KEY_ID || "",
      secretAccessKey: process.env.R2_SECRET_ACCESS_KEY || "",
    },
    region: "auto", // Required for R2
    forcePathStyle: true, // Required for R2
  },
});

export default buildConfig({
  admin: {
    // Your admin config
  },
  collections: [
    {
      slug: "media",
      upload: {
        staticDir: path.resolve(__dirname, "../media"),
        // Image sizes are processed locally before upload
        imageSizes: [
          {
            name: "thumbnail",
            width: 400,
            height: 300,
            position: "centre",
          },
          {
            name: "card",
            width: 768,
            height: 1024,
            position: "centre",
          },
        ],
        formatOptions: {
          format: "webp", // Convert uploads to WebP
          options: {
            quality: 80,
          },
        },
      },
      fields: [], // Add custom fields as needed
    },
  ],
  plugins: [storage],
});
```
Critical: The `endpoint` in the config requires the `https://` protocol prefix. The environment variable contains only the domain, and we add the protocol in the configuration; without it the S3 client cannot parse the URL.
5. Public Access Configuration
To make your R2 bucket publicly accessible, you have two options:
- Using Cloudflare Workers (Recommended):
```js
// worker.js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const objectKey = url.pathname.slice(1); // Remove leading slash

    try {
      const object = await env.MY_BUCKET.get(objectKey);

      if (!object) {
        return new Response("Object Not Found", { status: 404 });
      }

      const headers = new Headers();
      object.writeHttpMetadata(headers);
      headers.set("etag", object.httpEtag);

      return new Response(object.body, {
        headers,
      });
    } catch (error) {
      return new Response("Internal Error", { status: 500 });
    }
  },
};
```
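The Worker above reads from `env.MY_BUCKET`, which must be declared as an R2 bucket binding in your `wrangler.toml`. A minimal sketch; the Worker name and compatibility date are placeholder values:

```toml
# wrangler.toml — the binding name MY_BUCKET must match env.MY_BUCKET in worker.js
name = "r2-public-access"
main = "worker.js"
compatibility_date = "2024-01-01"

[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "your_bucket_name"
```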
- Using Public Bucket (Simpler but less control):
- Enable public access in R2 bucket settings
- Update your generateFileURL config to use the public bucket URL
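If you go the public-bucket route, `generateFileURL` can point straight at the public base URL. A hedged sketch, assuming you define your own `R2_PUBLIC_URL` variable (e.g. the bucket's r2.dev URL or a custom domain); the helper name `publicFileURL` is illustrative:

```typescript
// Build a public file URL from a base you configure yourself.
// R2_PUBLIC_URL is an assumed env var, not something R2 or Payload sets.
function publicFileURL(
  base: string,
  prefix: string | undefined,
  filename: string,
): string {
  // Drop a trailing slash on the base, skip an empty prefix, join with "/".
  const parts = [base.replace(/\/$/, ""), prefix, filename].filter(Boolean);
  return parts.join("/");
}

// In payload.config.ts:
// generateFileURL: ({ filename, prefix }) =>
//   publicFileURL(process.env.R2_PUBLIC_URL ?? "", prefix, filename),
```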
Advanced Configuration
Custom Upload Handlers
```ts
s3Storage({
  // ...other config
  beforeUpload: async ({ req, file }) => {
    // Customize file before upload
    return file;
  },
  afterUpload: async ({ req, file, collection }) => {
    // Perform actions after successful upload
    console.log(`File ${file.filename} uploaded to ${collection.slug}`);
  },
});
```
Error Handling
Implement robust error handling:
```ts
try {
  await payload.create({
    collection: "media",
    data: {
      // your upload data
    },
  });
} catch (error) {
  if (error.code === "AccessDenied") {
    console.error("R2 access denied - check credentials");
  } else if (error.code === "NoSuchBucket") {
    console.error("R2 bucket not found");
  } else {
    console.error("Upload failed:", error);
  }
}
```
Troubleshooting Guide
Common Issues
Invalid URL Error
If you encounter `TypeError: Invalid URL` errors:
- Ensure the `endpoint` in your config includes the `https://` protocol
- The environment variable should contain ONLY the domain (e.g., `account-id.r2.cloudflarestorage.com`)
- The protocol is added in the configuration: `` endpoint: `https://${process.env.R2_ENDPOINT}` ``
Connection Errors
- Verify R2 credentials are correct
- Confirm endpoint URL format matches the pattern above
- Check network connectivity
- Validate IP allowlist settings
Upload Failures
- Verify bucket exists and is accessible
- Check API token permissions (needs both Read & Write)
- Confirm file size limits
- Validate MIME type restrictions
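The size and MIME checks above can be run before a file ever reaches Payload. A minimal sketch; the 10 MB limit and the allowed-type list are arbitrary examples, not Payload or R2 defaults:

```typescript
// Validate a candidate upload before handing it to Payload / R2.
// The limit and allowed types below are illustrative choices only.
const ALLOWED_MIME_TYPES = ["image/jpeg", "image/png", "image/webp"];
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB

function validateUpload(file: { mimeType: string; size: number }): string[] {
  const errors: string[] = [];
  if (!ALLOWED_MIME_TYPES.includes(file.mimeType)) {
    errors.push(`Unsupported MIME type: ${file.mimeType}`);
  }
  if (file.size > MAX_BYTES) {
    errors.push(`File too large: ${file.size} bytes (max ${MAX_BYTES})`);
  }
  return errors;
}
```

Returning a list of errors (rather than throwing on the first one) lets you surface every problem with a file to the user at once.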
URL Generation Issues
- Verify public bucket configuration
- Check custom domain settings
- Validate URL formatting
Health Check
Run this diagnostic code to verify your setup:
```ts
async function checkR2Setup() {
  try {
    const testUpload = await payload.create({
      collection: "media",
      data: {
        // test file data
      },
    });
    console.log("R2 connection successful:", testUpload);
  } catch (error) {
    console.error("R2 setup check failed:", error);
  }
}
```
Performance Optimization
File Compression
- Implement image compression before upload
- Use appropriate file formats
- Set reasonable size limits
Caching Strategy
- Configure browser caching headers
- Implement cache-control policies
- Use Cloudflare's caching features
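One place to apply such a policy is the Worker from step 5, after `object.writeHttpMetadata(headers)`. A sketch of a Cache-Control chooser; the one-year immutable policy for uploaded media is an assumption you should tune to your own invalidation strategy:

```typescript
// Compute a Cache-Control value for a served object key.
// The policy (long-lived immutable media, short-lived everything else)
// is an illustrative choice, not a Cloudflare default.
function cacheControlFor(objectKey: string): string {
  // Files under the "media" prefix get unique names, so cache aggressively.
  if (objectKey.startsWith("media/")) {
    return "public, max-age=31536000, immutable";
  }
  return "public, max-age=3600";
}

// In the Worker:
// headers.set("Cache-Control", cacheControlFor(objectKey));
```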
Security Best Practices
Access Control
- Use least-privilege access tokens
- Implement bucket policies
- Enable audit logging
Data Protection
- Enable encryption at rest
- Implement secure file validation
- Regular security audits
Resources
- Payload CMS Documentation
- Cloudflare R2 Documentation
- S3 Plugin Documentation
- Cloudflare Workers Documentation
Support
For additional help:
- Join the Payload CMS Discord
- Visit the Cloudflare Community Forums
- Submit issues on GitHub