Vercel Blob
Vercel Blob is a scalable and cost-effective object storage service for static assets, such as images, videos, audio files, and more. Vercel Blob is available on all plans.
Those with the owner, member, or developer role can access this feature.
Vercel Blob is a great solution for storing blobs that need to be read frequently. Here are some examples of content well suited to Vercel Blob:
- Files that are programmatically uploaded or generated at build time for display and download, such as avatars, screenshots, cover images, and videos
- Large files such as videos and audio files, to take advantage of the global network
- Files that you would normally store in an external file storage solution like Amazon S3. With your project hosted on Vercel, you can readily access and manage these files with Vercel Blob
You can create and manage your Vercel Blob stores from your account dashboard. You can scope your Vercel Blob stores to your Hobby account or team, and connect them to as many projects as you want.
To get started, see the server-side or client-side quickstart guides.
If you're deciding whether Vercel Blob fits into your workflow, it's worth knowing the following:
- You can have one or more Vercel Blob stores per Vercel account
- You can use multiple Vercel Blob stores in one Vercel project
- Each Vercel Blob store can be accessed by multiple Vercel projects
- Vercel Blob URLs are publicly accessible, created with an unguessable random id, and immutable
- To add to or remove from the content of a Blob store, a valid token is required (see the sketch right after this list)
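As a quick illustration, here's a minimal write using the SDK. This is a sketch assuming @vercel/blob is installed; the SDK reads BLOB_READ_WRITE_TOKEN from the environment by default, so passing the token explicitly, as below, is optional:
import { put } from '@vercel/blob';
// A valid read-write token is required for writes. The SDK picks up
// BLOB_READ_WRITE_TOKEN from the environment automatically; the explicit
// token option below is shown only for illustration.
const blob = await put('hello.txt', 'Hello World!', {
  access: 'public',
  token: process.env.BLOB_READ_WRITE_TOKEN,
});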
There are two ways to upload files to Vercel Blob:
- Server uploads: This is the most common way to upload files. The file is first sent to your server and then to Vercel Blob. It's straightforward to implement, but you are limited by the request body size your server can handle, which in the case of a Vercel-hosted website is 4.5 MB. This means you can't upload files larger than 4.5 MB on Vercel when using this method (a minimal route sketch follows below).
- Client uploads: This is a more advanced solution for when you need to upload larger files. The file is securely sent directly from the client (a browser for example) to Vercel Blob. This requires a bit more work to implement, but it allows you to upload files up to 5 TB (5,000 GB).
You can also upload files larger than 4.5 MB directly from a script or server code, as long as the file isn't received from a Vercel-hosted website. An example of that would be a server-side fetch() request streaming the response to Vercel Blob.
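As a rough sketch of a server upload, here's what a Next.js route handler might look like. The route path and the file field name are illustrative assumptions, not requirements of the SDK:
import { put } from '@vercel/blob';

// Hypothetical route: app/api/upload/route.ts
export async function POST(request: Request): Promise<Response> {
  const form = await request.formData();
  const file = form.get('file') as File; // 'file' is an illustrative field name

  // The whole body passes through this function, so the 4.5 MB request
  // body limit applies on Vercel-hosted deployments.
  const blob = await put(file.name, file, { access: 'public' });

  return Response.json(blob);
}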
Vercel Blob URLs, although publicly accessible, are unique and hard to guess. They are composed of a unique store id, a pathname, and a unique random blob id generated when the blob is created.
This is similar to Share a file publicly in Google Docs. You should ensure that the URLs are only shared with authorized users.
The following headers, which enhance security by preventing unauthorized downloads, blocking external content from being embedded, and protecting against malicious file type manipulation, are enforced on each blob:
- content-security-policy: default-src "none"
- x-frame-options: DENY
- x-content-type-options: nosniff
- content-disposition: attachment/inline; filename="filename.extension"
All files stored on Vercel Blob are secured using AES-256 encryption. This encryption process is applied at rest and is transparent, ensuring that files are encrypted before being saved to the disk and decrypted upon retrieval.
Each blob is served with a content-disposition header. Based on the MIME type of the uploaded file, it is set either to attachment (force file download) or to inline (can render in a browser tab). This is done to prevent hosting certain files, like HTML web pages, directly on @vercel/blob. In these cases, your browser will automatically download the file instead of displaying it.
Currently, text/plain, text/xml, application/json, application/pdf, image/*, audio/*, and video/* resolve to a content-disposition: inline header. All other MIME types default to content-disposition: attachment.
If you need a blob URL that always forces a download, you can use the downloadUrl property on the blob object. This URL always has the content-disposition: attachment header, no matter its MIME type.
import { list } from '@vercel/blob';
export default async function Page() {
const response = await list();
return (
<>
{response.blobs.map((blob) => (
<a key={blob.pathname} href={blob.downloadUrl}>
{blob.pathname}
</a>
))}
</>
);
}
Alternatively, the SDK exposes a helper function called getDownloadUrl that returns the same URL.
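For example, a minimal sketch (the blob URL shown is illustrative):
import { getDownloadUrl } from '@vercel/blob';

// Derive the force-download variant of an existing blob URL.
const downloadUrl = getDownloadUrl(
  'https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/example.txt',
);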
When you request a blob URL using a browser, the content is cached in two places:
- Your browser's cache
- Vercel's edge cache
Both caches store blobs for up to 1 month by default to ensure optimal performance when serving content. While both systems aim to respect this duration, blobs may occasionally expire earlier.
You can customize the caching duration using the cacheControlMaxAge option in the put() and handleUpload methods.
The minimum configurable value is 60 seconds (1 minute). This represents the maximum time needed for our cache to update content behind a blob URL. For applications requiring faster updates, consider using a Vercel function instead.
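For example, a sketch assuming a small JSON payload that should only be cached for five minutes instead of the default month:
import { put } from '@vercel/blob';

// cacheControlMaxAge is expressed in seconds; 60 is the minimum allowed.
const blob = await put('data/top-sales.json', JSON.stringify({ sales: [] }), {
  access: 'public',
  cacheControlMaxAge: 300, // five minutes
});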
When you delete or update (overwrite) a blob, the changes may take up to 60 seconds to propagate through our edge cache. However, browser caching presents additional challenges:
- While our edge cache can update to serve the latest content, browsers will continue serving the cached version
- To force browsers to fetch the updated content, add a unique query parameter to the blob URL:
<img
src="https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/blob-oYnXSVczoLa9yBYMFJOSNdaiiervF5.png?v=123456"
/>
For more information about updating existing blobs, see the Overwriting Blobs section.
For optimal performance and to avoid caching issues, consider treating blobs as immutable objects:
- Instead of updating existing blobs, create new ones with different pathnames (or use the addRandomSuffix: true option)
- This approach avoids unexpected behaviors like outdated content appearing in your application
There are still valid use cases for mutable blobs with shorter cache durations, such as a single JSON file that's updated every 5 minutes with a top list of sales or other regularly refreshed data. For these scenarios, set an appropriate cacheControlMaxAge value and be mindful of caching behaviors.
By default, Vercel Blob prevents you from accidentally overwriting existing blobs by using the same pathname twice. When you attempt to upload a blob with a pathname that already exists, the operation will throw an error.
To explicitly allow overwriting existing blobs, you can use the allowOverwrite option:
const blob = await put('user-profile.jpg', imageFile, {
access: 'public',
allowOverwrite: true, // Enable overwriting an existing blob with the same pathname
});
This option is available in these methods:
- put()
- In client uploads, via the onBeforeGenerateToken() function
Overwriting blobs can be appropriate for certain use cases:
- Regularly updated files: For files that need to maintain the same URL but contain updated content (like JSON data files or configuration files)
- Content with predictable update patterns: For data that changes on a schedule and where consumers expect updates at the same URL
When overwriting blobs, be aware that due to caching, changes won't be immediately visible. The minimum time for changes to propagate is 60 seconds, and browser caches may need to be explicitly refreshed.
If you want to avoid overwriting existing content (recommended for most use cases), you have two options:
- Use addRandomSuffix: true: This automatically adds a unique random suffix to your pathnames:
const blob = await put('avatar.jpg', imageFile, {
access: 'public',
addRandomSuffix: true, // Creates a pathname like 'avatar-oYnXSVczoLa9yBYMFJOSNdaiiervF5.jpg'
});
- Generate unique pathnames programmatically: Create unique pathnames by adding timestamps, UUIDs, or other identifiers:
const timestamp = Date.now();
const blob = await put(`user-profile-${timestamp}.jpg`, imageFile, {
access: 'public',
});
Currently, Vercel Blob physically stores all data in a single Vercel region: iad1 (us-east-1) in the United States. While this setup ensures high performance and reliability for most customers, it may not meet the data residency requirements of some, particularly those in EMEA or APAC regions with strict data sovereignty regulations.
If your application requires storing data in a specific region or country, Vercel Blob may not be suitable at this time. Future updates will include support for additional storage regions.
Vercel Blob leverages Amazon S3 as its underlying storage infrastructure, providing industry-leading durability and availability:
- Durability: Vercel Blob offers 99.999999999% (11 nines) durability. This means that even with one billion objects, you could expect to go a hundred years without losing a single one.
- Availability: Vercel Blob provides 99.99% (4 nines) availability in a given year, ensuring that your data is accessible when you need it.
These guarantees are backed by S3's robust architecture, which includes automatic replication and error correction mechanisms.
Vercel Blob has folders support to organize your files:
const blob = await put('folder/file.txt', 'Hello World!', { access: 'public' });
The path folder/file.txt creates a folder named folder and a blob file named file.txt. To list all blobs within a folder, use the list function:
const listOfBlobs = await list({
cursor,
limit: 1000,
prefix: 'folder/',
});
You don't need to create folders. Upload a file with a path containing a slash /, and Vercel Blob will interpret the slashes as folder delimiters.
In the Vercel Blob file browser on the Vercel dashboard, any pathname with a slash / is treated as a folder. However, these are not actual folders like in a traditional file system; they are used for organizing blobs in listings and the file browser.
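If your SDK version supports it, list also accepts a folded mode that groups results by folder. Here's a sketch; treat the mode: 'folded' option as an assumption to verify against your SDK version:
import { list } from '@vercel/blob';

// mode: 'folded' groups pathnames by their next slash-delimited segment
// instead of returning every individual blob.
const { folders, blobs } = await list({ mode: 'folded' });

console.log(folders); // e.g. ['folder/']
console.log(blobs.map((b) => b.pathname)); // blobs not nested inside a folder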
Vercel Blob supports range requests for partial downloads, meaning you can download only a portion of a blob. Here are some examples:
curl https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/range-requests.txt
# 0123456789
# First 5 bytes
curl -r 0-4 https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/range-requests.txt
# 01234
# Last 5 bytes
curl -r -5 https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/range-requests.txt
# 56789
# Bytes 3-6
curl -r 3-6 https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/range-requests.txt
# 3456
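The same works from code with a standard HTTP Range header; a minimal sketch using fetch against the example file above:
// Request only the first 5 bytes of the blob.
const res = await fetch(
  'https://1sxstfwepd7zn41q.public.blob.vercel-storage.com/range-requests.txt',
  { headers: { Range: 'bytes=0-4' } },
);
console.log(res.status); // 206 Partial Content
console.log(await res.text()); // "01234"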
You can track the upload progress when uploading blobs with the onUploadProgress callback:
import { upload } from '@vercel/blob/client';
const blob = await upload('big-file.mp4', file, {
access: 'public',
handleUploadUrl: '/api/upload',
onUploadProgress: (progressEvent) => {
console.log(`Loaded ${progressEvent.loaded} bytes`);
console.log(`Total ${progressEvent.total} bytes`);
console.log(`Percentage ${progressEvent.percentage}%`);
},
});
onUploadProgress is available on the put and upload methods.
Every Vercel Blob operation can be canceled, just like a fetch call. This is useful when you want to abort an ongoing operation, for example, when a user navigates away from a page or when the request takes too long.
import * as vercelBlob from '@vercel/blob';
const abortController = new AbortController();
try {
const blobPromise = vercelBlob.put('hello.txt', 'Hello World!', {
access: 'public',
abortSignal: abortController.signal,
});
const timeout = setTimeout(() => {
// Abort the request after 1 second
abortController.abort();
}, 1000);
const blob = await blobPromise;
console.info('blob put request completed', blob);
clearTimeout(timeout);
return blob.url;
} catch (error) {
if (error instanceof vercelBlob.BlobRequestAbortedError) {
// Handle the abort
console.info('canceled put request');
}
// Handle other errors
}
If, for some reason, you want to delete all the blobs in your store, do this:
import { list, del } from '@vercel/blob';
async function deleteAllBlobs() {
let cursor;
do {
const listResult = await list({
cursor,
limit: 1000,
});
if (listResult.blobs.length > 0) {
await del(listResult.blobs.map((blob) => blob.url));
}
cursor = listResult.cursor;
} while (cursor);
console.log('All blobs were deleted');
}
deleteAllBlobs().catch((error) => {
console.error('An error occurred:', error);
});
While there's no native backup system for Vercel Blob, here are two ways to back up your blobs:
- Continuous backup: When using Client Uploads, you can leverage the onUploadCompleted callback from the handleUpload server-side function to save every blob upload to another storage (see the sketch after this list).
- Periodic backup: Using Cron Jobs and the Vercel Blob SDK, you can periodically list all blobs and save them.
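Here's a hedged sketch of the continuous approach in a Next.js route that handles client uploads; backupToSecondaryStorage is a hypothetical helper standing in for your own copy logic:
import { handleUpload, type HandleUploadBody } from '@vercel/blob/client';

// Hypothetical helper: stream a finished blob to your secondary storage.
async function backupToSecondaryStorage(url: string, pathname: string) {
  const res = await fetch(url);
  // ... write res.body to your backup bucket under `pathname` ...
}

// Hypothetical route: app/api/upload/route.ts
export async function POST(request: Request): Promise<Response> {
  const body = (await request.json()) as HandleUploadBody;
  const jsonResponse = await handleUpload({
    body,
    request,
    onBeforeGenerateToken: async () => ({
      allowedContentTypes: ['image/jpeg', 'image/png'], // illustrative restriction
    }),
    // Called by Vercel Blob once the client upload has finished.
    onUploadCompleted: async ({ blob }) => {
      await backupToSecondaryStorage(blob.url, blob.pathname);
    },
  });
  return Response.json(jsonResponse);
}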
Here's an example implementation of a periodic backup as a Cron Job:
import { Readable } from "node:stream";
import { S3Client } from "@aws-sdk/client-s3";
import { list } from "@vercel/blob";
import { Upload } from "@aws-sdk/lib-storage";
import type { NextRequest } from "next/server";
import type { ReadableStream } from "node:stream/web";
export async function GET(request: NextRequest) {
const authHeader = request.headers.get("authorization");
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
return new Response("Unauthorized", {
status: 401,
});
}
const s3 = new S3Client({
region: "us-east-1",
});
let cursor: string | undefined;
do {
const listResult = await list({
cursor,
limit: 250,
});
if (listResult.blobs.length > 0) {
await Promise.all(
listResult.blobs.map(async (blob) => {
const res = await fetch(blob.url);
if (res.body) {
const parallelUploads3 = new Upload({
client: s3,
params: {
Bucket: "vercel-blob-backup",
Key: blob.pathname,
Body: Readable.fromWeb(res.body as ReadableStream),
},
leavePartsOnError: false,
});
await parallelUploads3.done();
}
})
);
}
cursor = listResult.cursor;
} while (cursor);
return new Response("Backup done!");
}
This script avoids buffering all the file contents into memory; instead, it streams the content directly from Vercel Blob to the backup storage.
You can split your backup process into smaller chunks if you're hitting an execution limit. In this case, you would save the cursor to a database and resume the backup process from where it left off.
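A minimal sketch of that resumption pattern, where kv is a hypothetical key-value client (not part of the Blob SDK) used to persist the cursor between runs:
import { list } from '@vercel/blob';

// Hypothetical key-value client with get/set, e.g. backed by your database.
declare const kv: {
  get(key: string): Promise<string | null>;
  set(key: string, value: string | null): Promise<void>;
};

export async function backupNextChunk(): Promise<void> {
  // Resume from wherever the previous invocation stopped.
  const cursor = (await kv.get('blob-backup-cursor')) ?? undefined;
  const listResult = await list({ cursor, limit: 250 });

  // ... back up listResult.blobs as in the Cron Job example above ...

  // Persist the next cursor, or clear it once the listing is complete.
  await kv.set('blob-backup-cursor', listResult.cursor ?? null);
}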