Vercel Modal Integration

Learn how to integrate Modal with Vercel.

Modal specializes in high-performance cloud computing for developers, focusing on deploying generative AI models and handling large-scale data workloads efficiently.

You can use the Vercel and Modal integration to power a variety of AI applications, including:

  • Generative AI models: Quickly deploy AI models for image, text, or music generation, utilizing Modal's efficient scaling capabilities
  • Batch processing: Execute large-scale batch jobs seamlessly, ideal for data analysis, processing, and transformation tasks
  • Voice generation: Create realistic, human-like voices for your applications, using Modal's advanced text-to-speech capabilities

Modal offers models that can be used for a variety of tasks, including:

Insanely Fast Whisper

Type: Audio

A customization of OpenAI's whisper-large-v3 for very fast transcription, powered by Hugging Face Transformers

Stable Diffusion XL

Type: Image

A text-to-image generative AI model that creates beautiful images
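
Both models are served as HTTPS endpoints, so you call them with an authenticated fetch, just as the Stable Diffusion XL walkthrough below does. As a rough illustration for the Whisper model, a transcription call might look like the sketch below; the endpoint URL, request fields, and response shape are placeholders rather than Modal's documented API, so check the model's page for the exact contract.

    // Hypothetical sketch: endpoint URL, payload fields, and response shape are placeholders
    export async function transcribe(audioUrl: string): Promise<string> {
      const response = await fetch(
        // Placeholder endpoint; use the URL listed on the model's Modal page
        'https://<your-workspace>--insanely-fast-whisper.modal.run/transcribe',
        {
          method: 'POST',
          headers: {
            // Modal token credentials provided by the Vercel integration
            Authorization: `Token ${process.env.MODAL_TOKEN_ID}:${process.env.MODAL_TOKEN_SECRET}`,
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ audio_url: audioUrl }), // placeholder field name
        },
      );
      if (!response.ok) {
        throw new Error(`Transcription failed: ${await response.text()}`);
      }
      const result = await response.json();
      return result.text; // assumes the endpoint returns { text: string }
    }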

The Vercel Modal integration can be accessed through the AI tab on your Vercel dashboard.

To add the Modal integration to your Vercel account and connect it to a project, follow these steps:

  1. Navigate to the AI tab in your Vercel dashboard
  2. Select Modal from the list of providers, and press Add
  3. Review the provider information, and press Add Provider
  4. You can now select which projects the provider will have access to. You can choose from All Projects or Specific Projects
    • If you select Specific Projects, you'll be prompted to select the projects you want to connect to the provider. The list will display projects associated with your scoped team
    • Multiple projects can be selected during this step
  5. Select the Connect to Project button
  6. You'll be redirected to the provider's website to complete the connection process
  7. Once the connection is complete, you'll be redirected back to the Vercel dashboard, and the provider integration dashboard page. From here you can manage your provider settings, view usage, and more
  8. Pull the environment variables into your project using Vercel CLI
    terminal
    vercel env pull .env.development.local
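    Pulling the variables writes the integration's credentials into .env.development.local. The exact variable names come from the provider, but based on the route handler in the next step they should include the Modal token pair, roughly (placeholder values):
    .env.development.local
    MODAL_TOKEN_ID="<your Modal token id>"
    MODAL_TOKEN_SECRET="<your Modal token secret>"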
  9. Connect your project using the code below:
    Next.js (/app)
    app/api/route.ts
    // app/api/route.ts
    export async function POST(request: Request) {
      const jsonBody = await request.json();
      // Build the request body for Modal's Stable Diffusion XL endpoint
      const body = JSON.stringify({
        prompt: jsonBody.prompt || '',
        height: 768,
        width: 768,
        num_outputs: 1,
        negative_prompt: 'deformed, ugly',
      });
      // Call the Modal endpoint, authenticating with the token pulled from Vercel
      const response = await fetch(
        'https://modal-labs--instant-stable-diffusion-xl.modal.run/v1/inference',
        {
          method: 'POST',
          headers: {
            Authorization: `Token ${process.env.MODAL_TOKEN_ID}:${process.env.MODAL_TOKEN_SECRET}`,
            'Content-Type': 'application/json',
          },
          body,
        },
      );
      if (response.status !== 201) {
        const message = await response.text();
        return Response.json({ message }, { status: 500 });
      }
      // Return the generated image bytes as a PNG response
      const imageBuffer = await response.arrayBuffer();
      return new Response(Buffer.from(imageBuffer), {
        headers: { 'Content-Type': 'image/png' },
      });
    }
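    With the route in place, you can sanity-check it with a plain POST request. The script below is a minimal, hypothetical helper that assumes the app is running locally at http://localhost:3000:
    // test-route.ts (hypothetical helper script; run it with a TypeScript runner such as tsx)
    import { writeFile } from 'node:fs/promises';

    async function main() {
      // Hit the route handler defined above with a sample prompt
      const response = await fetch('http://localhost:3000/api', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt: 'A cute cartoon fox with a top-hat' }),
      });
      if (!response.ok) {
        throw new Error(await response.text());
      }
      // Save the returned PNG so it can be inspected locally
      await writeFile('output.png', Buffer.from(await response.arrayBuffer()));
      console.log('Wrote output.png');
    }

    main().catch(console.error);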
  10. Add the provider to your page using the code below:
    Next.js (/app)
    app/page.tsx
    // app/page.tsx
    'use client';
    import { useState } from 'react';
    export default function Page() {
      const [prompt, setPrompt] = useState('A cute cartoon fox with a top-hat');
      const [imageUrl, setImageUrl] = useState<string | null>(null);
      const generateImage = async (e: React.FormEvent<HTMLFormElement>) => {
        e.preventDefault();
        // POST the prompt to the API route, which proxies the request to Modal
        try {
          const response = await fetch('/api', {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              prompt,
            }),
          });
          if (response.status === 200) {
            // Turn the returned PNG bytes into an object URL for the <img> tag
            const imageBuffer = await response.arrayBuffer();
            const blob = new Blob([imageBuffer], { type: 'image/png' });
            const imageUrl = URL.createObjectURL(blob);
            setImageUrl(imageUrl);
          } else {
            const message = await response.text();
            setImageUrl(null);
            console.error(`Error: ${message}`);
          }
        } catch (error) {
          console.error('Failed to fetch image:', error);
        }
      };
      return (
        <main className="flex min-h-screen flex-col items-center justify-between p-24">
          <form onSubmit={generateImage} className="flex gap-4 w-full">
            <input
              type="text"
              placeholder="Enter a prompt"
              value={prompt}
              onChange={(e) => setPrompt(e.target.value)}
              className="input w-full"
            />
            <button type="submit" className="btn border border-black bg-white">
              Generate Image
            </button>
          </form>
          {imageUrl && (
            <img
              src={imageUrl}
              alt="Generated"
              className="mt-4 max-w-full h-auto"
            />
          )}
        </main>
      );
    }
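    To try it out locally, start the Next.js dev server (for example, npm run dev), open the app in your browser, and submit a prompt: the form POSTs to the /api route from the previous step, which forwards the request to Modal and returns the generated PNG that the page then renders.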
Last updated on July 31, 2024