vercel-logotype Logovercel-logotype Logo
    • Frameworks
      • Next.js

        The native Next.js platform

      • Turborepo

        Speed with Enterprise scale

      • AI SDK

        The AI Toolkit for TypeScript

    • Infrastructure
      • CI/CD

        Helping teams ship 6× faster

      • Delivery network

        Fast, scalable, and reliable

      • Fluid compute

        Servers, in serverless form

      • AI Infrastructure

        AI Gateway, Sandbox, and more

      • Observability

        Trace every step

    • Security
      • Platform security

        DDoS Protection, Firewall

      • Web Application Firewall

        Granular, custom protection

      • Bot management

        BotID, Bot Protection

    • Use Cases
      • AI Apps

        Deploy at the speed of AI

      • Composable Commerce

        Power storefronts that convert

      • Marketing Sites

        Launch campaigns fast

      • Multi-tenant Platforms

        Scale apps with one codebase

      • Web Apps

        Ship features, not infrastructure

    • Users
      • Platform Engineers

        Automate away repetition

      • Design Engineers

        Deploy for every idea

    • Tools
      • Resource Center

        Today’s best practices

      • Marketplace

        Extend and automate workflows

      • Templates

        Jumpstart app development

      • Guides

        Find help quickly

      • Partner Finder

        Get help from solution partners

    • Company
      • Customers

        Trusted by the best teams

      • Blog

        The latest posts and changes

      • Changelog

        See what shipped

      • Press

        Read the latest news

      • Events

        Join us at an event

  • Enterprise
  • Docs
  • Pricing
  • All Posts
  • Engineering
  • Community
  • Company News
  • Customers
  • v0
  • Changelog
  • Press
  • No results found for "".
    Try again with a different keyword.

    Featured articles

  • May 20

    Introducing the AI Gateway

    The Vercel AI Gateway is now available for alpha testing. Built on the AI SDK 5 alpha, the Gateway lets you switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts. The Gateway handles authentication, usage tracking, and in the future, billing. Get started with AI SDK 5 and the Gateway, or continue reading to learn more. Why we’re building the AI Gateway The current speed of AI development is fast and is only getting faster. There's a new state-of-the-art model released almost every week. Frustratingly, this means developers have been locked into a specific provider or model API in their application code. We want to help developers ship fast and keep up with AI progress, without needing 10 different API keys and provider accounts. Prod...

    Walter and Lars
  • May 1

    iOS developers can now offer commission-free payments on web

    Yesterday, a federal court made a decisive ruling in Epic Games v. Apple: Apple violated a 2021 injunction by continuing to restrict developers from linking to external payment methods, and by imposing a 27% fee when they did. The ruling represents a major shift for native app developers. Why does it matter? Apple’s App Store has operated as a tightly controlled marketplace. Until now, developers couldn’t even tell users they could pay elsewhere. Apple’s 30% cut (the so-called "Apple Tax") meant higher prices for consumers, smaller margins for developers, and less freedom overall. After the 2021 injunction, Apple introduced a system called StoreKit External Purchase Link API. This surfaced a system disclosure sheet, a "scare screen," warning users that they were about to leave th...

    Fernando Rojo
  • Jun 4

    The no-nonsense approach to AI agent development

    AI agents are software systems that take over tasks made up of manual, multi-step processes. These often require context, judgment, and adaptation, making them difficult to automate with simple rule-based code. While traditional automation is possible, it usually means hardcoding endless edge cases. Agents offer a more flexible approach. They use context to decide what to do next, reducing manual effort on tedious steps while keeping a review process in place for important decisions. The most effective AI agents are narrow, tightly scoped, and domain-specific. Here's how to approach building one.

    Malte Ubl

    Latest news.

  • Company News
    Jun 26

    Vercel Ship 2025 recap

    My first week at Vercel coincided with something extraordinary: Vercel Ship 2025. Vercel Ship 2025 showcased better building blocks for the future of app development. AI has made this more important than ever. Over 1,200 people gathered in NYC for our third annual event, to hear the latest updates in AI, compute, security, and more.

    Keith Messick
  • General
    Jun 25

    Introducing Active CPU pricing for Fluid compute

    Fluid compute exists for a new class of workloads. I/O bound backends like AI inference, agents, MCP servers, and anything that needs to scale instantly, but often remains idle between operations. These workloads do not follow traditional, quick request-response patterns. They’re long-running, unpredictable, and use cloud resources in new ways. Fluid quickly became the default compute model on Vercel, helping teams cut costs by up to 85% through optimizations like in-function concurrency. Today, we’re taking the efficiency and cost savings further with a new pricing model: you pay CPU rates only when your code is actively using CPU.

    Dan and Mariano
  • General
    Jun 25

    ​Introducing BotID, invisible bot filtering for critical routes

    Modern sophisticated bots don’t look like bots. They execute JavaScript, solve CAPTCHAs, and navigate interfaces like real users. Tools like Playwright and Puppeteer can script human-like behavior from page load to form submission. Traditional defenses like checking headers or rate limits aren't enough. Bots that blend in by design are hard to detect and expensive to ignore. Enter BotID: A new layer of protection on Vercel. Think of it as an invisible CAPTCHA to stop browser automation before it reaches your backend. It’s built to protect critical routes where automated abuse has real cost such as checkouts, logins, signups, APIs, or actions that trigger expensive backend operations like LLM-powered endpoints.

    +2
    Jen, Andrew, and 2 others
  • Company News
    Jun 24

    WPP and Vercel: Bringing AI to the creative process

    Today, we're announcing an expansion of our partnership with WPP. A first-of-its-kind agency collaboration that now brings v0 and AI SDK directly to WPP's global network of creative teams and their clients.

    Jen Chang
  • General
    Jun 23

    Keith Messick joins Vercel as CMO

    Vercel is evolving to meet the expanding potential of AI while staying grounded in the principles that brought us here. We're extending from frontend to full stack, deepening our enterprise capabilities, and powering the next generation of AI applications, including integrating AI into our own developer tools. Today, we’re welcoming Keith Messick as our first Chief Marketing Officer to support this growth and (as always) amplify the voice of the developer.

    Jeanne Grosser
  • Customers
    Jun 16

    Tray.ai cut build times from a day to minutes with Vercel

    Tray.ai is a composable AI integration and automation platform that enterprises use to build smart, secure AI agents at scale. To modernize their marketing site, they partnered with Roboto Studio to migrate off their legacy solution and outdated version of Next.js. The goal: simplify the architecture, consolidate siloed repos, and bring content and form management into one unified system. After moving to Vercel, builds went from a full day to just two minutes.

    Peri Langlois
  • Engineering
    Jun 12

    Building efficient MCP servers

    The Model Context Protocol (MCP) standardizes how to build integrations for AI models. We built the MCP adapter to help developers create their own MCP servers using popular frameworks such as Next.js, Nuxt, and SvelteKit. Production apps like Zapier, Composio, Vapi, and Solana use the MCP adapter to deploy their own MCP servers on Vercel, and they've seen substantial growth in the past month. MCP has been adopted by popular clients like Cursor, Claude, and Windsurf. These now support connecting to MCP servers and calling tools. Companies create their own MCP servers to make their tools available in the ecosystem. The growing adoption of MCP shows its importance, but scaling MCP servers reveals limitations in the original design. Let's look at how the MCP specification has evolved, and how the MCP adapter can help.

    Andrew Qu
  • General
    Jun 11

    Designing and building the Vercel Ship conference platform

    Our two conferences (Vercel Ship and Next.js Conf) are our chance to show what we've been building, how we're thinking, and cast a vision of where we're going next. It's also a chance to push ourselves to create an experience that builds excitement and reflects the quality we strive for in our products. For Vercel Ship 2025, we wanted that experience to feel fluid and fast. This is a look at how we made the conference platform and visuals, from ferrofluid-inspired 3D visuals and generative AI workflows, to modular component systems and more.

    +2
    Genny, Daniel, and 2 others
  • General
    Jun 10

    How we’re adapting SEO for LLMs and AI search

    Search is changing. Backlinks and keywords aren’t enough anymore. AI-first interfaces like ChatGPT and Google’s AI Overviews now answer questions before users ever click a link (if at all). Large language models (LLMs) have become a new layer in the discovery process, reshaping how, where, and when content is seen. This shift is changing how visibility works. It’s still early, and nobody has all the answers. But one pattern we're noticing is that LLMs tend to favor content that explains things clearly, deeply, and with structure. "LLM SEO" isn’t a replacement for traditional search engine optimization (SEO). It’s an adaptation. For marketers, content strategists, and product teams, this shift brings both risk and opportunity. How do you show up when AI controls the first impression, but not lose sight of traditional ranking strategies? Here’s what we’ve noticed, what we’re trying, and how we’re adapting.

    Kevin and Malte
  • Engineering
    Jun 9

    Building secure AI agents

    An AI agent is a language model with a system prompt and a set of tools. Tools extend the model's capabilities by adding access to APIs, file systems, and external services. But they also create new paths for things to go wrong. The most critical security risk is prompt injection. Similar to SQL injection, it allows attackers to slip commands into what looks like normal input. The difference is that with LLMs, there is no standard way to isolate or escape input. Anything the model sees, including user input, search results, or retrieved documents, can override the system prompt or event trigger tool calls. If you are building an agent, you must design for worst case scenarios. The model will see everything an attacker can control. And it might do exactly what they want.

    Malte Ubl
  • Engineering
    Jun 4

    The no-nonsense approach to AI agent development

    AI agents are software systems that take over tasks made up of manual, multi-step processes. These often require context, judgment, and adaptation, making them difficult to automate with simple rule-based code. While traditional automation is possible, it usually means hardcoding endless edge cases. Agents offer a more flexible approach. They use context to decide what to do next, reducing manual effort on tedious steps while keeping a review process in place for important decisions. The most effective AI agents are narrow, tightly scoped, and domain-specific. Here's how to approach building one.

    Malte Ubl
  • Engineering
    Jun 1

    Introducing the v0 composite model family

    We recently launched our AI models v0-1.5-md and v0-1.5-lg in v0.dev and v0-1.0-md via API. Today, we're sharing a deep dive into the composite model architecture behind those models. They combine specialized knowledge from retrieval-augmented generation (RAG), reasoning from state-of-the-art large language models (LLMs), and error fixing from a custom streaming post-processing model. While this may sound complex, it enables v0 to achieve significantly higher quality when generating code. Further, as base models improve, we can quickly upgrade to the latest frontier model while keeping the rest of the architecture stable.

    +2
    Aryaman, Gaspar, and 2 others

Ready to deploy? Start building with a free account. Speak to an expert for your Pro or Enterprise needs.

Start Deploying
Talk to an Expert

Explore Vercel Enterprise with an interactive product tour, trial, or a personalized demo.

Explore Enterprise

Products

  • AI
  • Enterprise
  • Fluid Compute
  • Next.js
  • Observability
  • Previews
  • Rendering
  • Security
  • Turbo
  • v0

Resources

  • Community
  • Docs
  • Guides
  • Help
  • Integrations
  • Pricing
  • Resources
  • Solution Partners
  • Startups
  • Templates

Company

  • About
  • Blog
  • Careers
  • Changelog
  • Events
  • Contact Us
  • Customers
  • Partners
  • Privacy Policy

Social

  • GitHub
  • LinkedIn
  • Twitter
  • YouTube

Loading status…

Select a display theme: