Apr. 17th, 2024

Simplify your Kubernetes frontend deployment with Vercel

Extend your backend with the Frontend Cloud's managed infrastructure.

Maybe you’ve seen it happen: Your team is almost finished with a critical project when an infrastructure-related delay hits, throwing off your timeline by weeks. To make matters worse, a competitor launches a similar feature. Your customers, understandably frustrated, ask, “What’s the holdup?”

Or maybe you do launch on time. Your marketers built up the hype, and you’ve pre-provisioned extra server capacity for launch day to handle the traffic. Still, the surge of users is bigger than you expected, and now you’re seeing minutes-long outages crush potential business.

Let's be clear: these problems aren't your DevOps team's fault. Frontend complexity has exploded over the past decade. What used to be little more than static assets in an S3 bucket now requires layers of network infrastructure to scale and cache server-side JavaScript. Instead of a traditional CDN, you now need the full capabilities of an edge network to securely deliver your dynamic application to mobile-first users—all while maintaining nearly 100% uptime around the globe.

This is where Vercel can help. We're obsessed with automating the developer experience and the intricate network infrastructure that modern frontends need. As a dedicated Frontend Cloud that securely integrates with your existing Kubernetes backend, Vercel takes the hassle and unpredictability out of web frontends.

Decoupled frontends create faster cycles

The first step toward an optimized developer and user experience is to separate your frontend and backend.

Monolithic architecture, where backend logic and application rendering are handled in one large codebase and deployed to a single Docker image, is often the go-to for the initial setup of a web application. As your application scales, however, challenges specific to monoliths come up:

  • Even small frontend changes often require repackaging and redeploying full container images, which drags out testing cycles and increases the time between completed code and when that code can go live.
  • Frontend and backend teams often find themselves intertwined, which can create bottlenecks as one team waits for the other to complete tasks.
  • Changes may also have ripple effects. Minor frontend modifications might require careful coordination with backend teams to ensure that there’s no impact on unrelated areas of the codebase.
  • Developers working on monoliths often need to juggle frontend frameworks, backend logic, and Kubernetes configuration, which can impact productivity, onboarding, and hiring.

The problem is simple, on its face: frontend development needs to move faster, with smaller cycles, than backend development. Managing your frontend inside the same Docker image as your backend can stifle innovation.

Decoupling your backend and frontend, whether inside Kubernetes or not, alleviates these specific challenges. Frontend and backend teams work independently with clearly defined API contracts, which boosts agility and protects sensitive backend data.
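In practice, a contract like this can be as small as a shared type that both teams code against. The sketch below is illustrative (the `ProductSummary` shape and field names are hypothetical, not from any particular API):

```typescript
// A hypothetical shared API contract between frontend and backend teams.
// The backend guarantees this shape; the frontend codes against it.
interface ProductSummary {
  id: string;
  name: string;
  priceCents: number;
}

// A narrow runtime check the frontend can run on API responses, so a
// backend change that breaks the contract fails loudly instead of silently.
function isProductSummary(value: unknown): value is ProductSummary {
  const v = value as Record<string, unknown>;
  return (
    typeof v === "object" &&
    v !== null &&
    typeof v.id === "string" &&
    typeof v.name === "string" &&
    typeof v.priceCents === "number"
  );
}
```

With the contract pinned down, either side can change its internals freely as long as the boundary shape holds.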

Most importantly, though, you get to optimize your frontend without backend constraints, which means you can start to choose frontend-specific tools that automate typical K8s pain points, such as scaling, caching, and testing environments.

Serverless frontends take the hassle out of scaling

While decoupling your frontend already offers significant benefits, running it in a Kubernetes cluster still introduces scaling complexities and operational overhead. A serverless frontend architecture provides a more cost-efficient alternative.

Think of it this way: In Kubernetes, your containerized app is your replicable unit because it consistently gives the same outputs for the same inputs. Because app containers are stateless, you can take one or more and put them in a pod, which Kubernetes can then replicate as much as needed to scale and get the job done.

However, app containers are still large units. When faced with high traffic and the need to horizontally scale, app containers can take a while to boot (time measured in seconds or minutes), during which time your customers can’t properly access your application. Containers also serve a predefined chunk of users, meaning it’s easy to under- or over-provision—both of which cost your business in different ways.

Serverless offers a far more granular way to horizontally scale, since your smallest stateless unit becomes a function rather than a container. The way these functions are packaged influences their startup times and resource efficiency, but optimized functions typically spin up in tens of milliseconds. Since they’re provisioned on demand for each request, you don’t have to worry about pre-provisioning or traffic spikes. You get effectively infinite scale, zero scale, or anything in between, and you only pay for what you use.
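As a sketch, a serverless function is nothing more than a stateless request handler. The Web-standard form below is illustrative (the route and response shape are made up); because it holds no state between invocations, a platform can run zero, one, or thousands of copies concurrently:

```typescript
// A minimal stateless handler in the Web-standard Request/Response form.
// Same input always yields the same output, so the platform is free to
// replicate (or tear down) instances of it at will.
function handler(req: Request): Response {
  const name = new URL(req.url).searchParams.get("name") ?? "world";
  return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```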

Vercel's infrastructure scales automatically based on demand.

Framework-defined infrastructure for full automation

Serverless has clear advantages for frontends, which experience the bulk of infrastructure unpredictability. However, creating and managing a custom serverless environment on a big cloud provider like AWS or Azure can waste valuable engineering effort that could otherwise be spent on optimizing existing infrastructure.

Infrastructure as Code (IaC), introduced nearly two decades ago, drastically sped up infrastructure management, offering greater consistency and automation compared to manual processes. However, IaC still entails significant complexity. Even with tools like Terraform or CloudFormation, correctly configuring and deploying serverless infrastructure is a time-consuming process fraught with potential for error.

This is where Vercel's Framework-defined Infrastructure (FdI) advances the NoOps narrative. FdI intelligently infers the necessary infrastructure directly from your frontend code, eliminating the need for intricate configuration files in the first place. Here's how it reimagines the approach:

  • IaC limitations: IaC requires you to explicitly define infrastructure components (servers, load balancers, networks, etc.) and their relationships. This involves a steep learning curve and the potential for mistakes, especially as complexity grows.
  • FdI automation: Vercel understands the structure of popular JavaScript frameworks, as well as any custom tooling (such as a React/Express.js stack) that opts into its Build Output API. Based on conventions like routes and data-fetching methods, Vercel automatically provisions the correct infrastructure on its globally distributed serverless platform.
  • Realizing the NoOps vision: With FdI, infrastructure is no longer a primary concern for frontend development. Vercel dynamically scales serverless functions, optimizes content delivery at the edge, and secures deployments—all without tedious YAML or cluster configuration. DevOps teams can focus on core backend (Kubernetes) systems and improving developer experience across the business.
  • Rapid iteration cycles: Automatic infrastructure means feature development no longer waits on repetitive infrastructure setup and maintenance tasks. Freed from complexities, frontend teams get to work within familiar frameworks and iterate from design to deploy faster than ever.
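As a purely illustrative sketch of the idea (not Vercel's actual inference logic), framework-defined infrastructure means the deployment target is read from code conventions rather than declared in IaC files. The file-path conventions below are hypothetical:

```typescript
// Hypothetical illustration of framework-defined infrastructure: the
// infrastructure primitive is inferred from file conventions, so there is
// no separate Terraform/CloudFormation definition to keep in sync.
type InfraTarget = "static-asset" | "serverless-function" | "edge-cached-page";

function inferTarget(filePath: string): InfraTarget {
  if (filePath.startsWith("public/")) return "static-asset"; // pushed to the CDN
  if (filePath.startsWith("app/api/")) return "serverless-function"; // on-demand compute
  return "edge-cached-page"; // rendered output cached at the edge
}
```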

The Frontend Cloud empowers development and distribution

So, what does Vercel’s managed frontend infrastructure entail? Since you already have your backend infrastructure handled, Vercel can offer a highly focused set of tools to make every stage of frontend development seamless.

Vercel brings frontend development into one cohesive environment, while still staying flexible enough to let you design and optimize your stack from the entire ecosystem of open- and closed-source JavaScript tooling.

Integrated developer tooling with unlimited deployment environments

With Vercel, all development and staging concerns are handled automatically due to tight integration with Git providers. Thanks to the ability of serverless to scale to zero when unused, development resources are truly unlimited.

  • Every code change (git push) results in an immutable preview deployment with Vercel’s production infrastructure.
  • Every Git branch can have its own synchronized environment variables and even a custom domain.
  • Every deployment environment can be shared with stakeholders from a unique Vercel URL, or through RBAC.
  • Every stakeholder can leave comments directly on the live preview, which can then integrate with issue trackers (Linear, Jira, Slack, etc.) to streamline code reviews.
  • Every deployment can trigger custom CI/CD actions in your Git provider, enabling end-to-end testing against live data in your Kubernetes backend under Vercel production conditions.
  • Every production deployment is as simple as a git merge—all the guesswork is taken out.
  • Finally, every live site can be instantly rolled back to any previous working deployment, if absolutely anything goes wrong.

Essentially, as long as your frontend teams know how to work effectively with Git version control, they already know how to use Vercel.

Secure, global distribution that decreases backend pressure

Distributing your application to a global audience comes with a huge set of challenges that Vercel solves—with a 99.99% uptime SLA—through a framework-defined, frontend-first focus. All Vercel deployments take advantage of the Edge Network, a specialized content delivery network (CDN) that can granularly cache and compute your framework code for optimal user experience.

This edge caching goes beyond typical CDNs because it can use Incremental Static Regeneration (ISR) to programmatically cache and revalidate any data in your application without a redeploy. Next.js, for example, allows for component-level granularity, meaning you can choose exactly which pieces of each page are statically revalidated and when. Any cached data can be instantly served to the user, directly from the edge closest to their global position.
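Conceptually, ISR behaves like the stale-while-revalidate cache sketched below. This is a deliberate simplification, not Vercel's implementation; the class and parameter names are invented for illustration:

```typescript
// A simplified stale-while-revalidate cache, illustrating the idea behind
// ISR: cached data is always served instantly, and once it is older than
// revalidateMs, a request also triggers a background refresh.
type Entry<T> = { value: T; fetchedAt: number };

class IsrCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(
    private revalidateMs: number,
    private fetcher: (key: string) => Promise<T>,
  ) {}

  async get(key: string): Promise<T> {
    const entry = this.entries.get(key);
    if (!entry) {
      // First request: generate and cache (the initial static generation).
      const value = await this.fetcher(key);
      this.entries.set(key, { value, fetchedAt: Date.now() });
      return value;
    }
    if (Date.now() - entry.fetchedAt > this.revalidateMs) {
      // Stale: serve the cached value now, revalidate in the background.
      this.fetcher(key).then((value) =>
        this.entries.set(key, { value, fetchedAt: Date.now() }),
      );
    }
    return entry.value;
  }
}
```

The key property is that no user ever waits on the refresh: stale content is served immediately while the new version is generated behind the scenes.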

Caching on Vercel’s Edge Network serves both latency and availability, so users don’t have to hit your Kubernetes cluster directly on every request, which can greatly decrease backend pressure. It also means that, during temporary backend outages caused by K8s autoscaling delays, your users can still access cached data at the same high speed, albeit in a read-only fashion. In case of a longer provider outage, you can use Edge Config to instantly reroute users to another available backend.
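The failover decision itself can be as small as reading a flag and picking a backend. The routing logic below is an illustrative sketch; in practice the flags could live in Vercel Edge Config, and the key names here ("activeRegion", "backends") are hypothetical:

```typescript
// Illustrative failover routing driven by a lightweight config read.
// Flipping activeRegion in the config reroutes traffic without a redeploy.
interface FailoverConfig {
  activeRegion: string;
  backends: Record<string, string>; // region -> backend base URL
}

function selectBackend(config: FailoverConfig): string {
  const url = config.backends[config.activeRegion];
  if (!url) throw new Error(`no backend registered for ${config.activeRegion}`);
  return url;
}
```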

Vercel’s serverless platform allows your app to scale to any size, serving millions of uncached user requests without performance degradation—as well as many times that in simultaneous cached requests. Uncached serverless functions can be optimized down to ~30ms of latency, and at enterprise scale, fewer than 0.2% of invocations across the platform are cold starts.

Serverless systems are also isolated by default, making your application more secure. Vercel Firewall provides meaningful protection from common attack vectors such as DDoS attacks and bot traffic, obviating the need for a separate frontend security provider in the vast majority of enterprise applications.

Centralized feedback to monitor your application at work

So, how do you access this infrastructure? As mentioned above, almost all of these optimizations are automatic, based on the output of your application framework code. For those few additional tweaks, like security settings on deployments, you can find tools accessible within the Vercel dashboard.

The Vercel dashboard also offers a full suite of observability tooling where you can analyze the performance of your frontend and quickly be alerted to any potential issues. For any places in your application workflow or distribution where you need more specific tooling than what Vercel offers, you can easily integrate third-party providers.

Getting set up

Here’s a bird’s eye view of how Vercel’s infrastructure works with your existing backend:

  • Within Kubernetes, you’ll need to build out your API endpoints, containerize them, and then deploy them as pods exposed by a LoadBalancer service.
  • To secure an endpoint, you’ll want to implement mechanisms like JWT (JSON Web Tokens) or OAuth. For more complex setups with multiple APIs, you can consider an API gateway (like Kong or Ambassador) to manage routing, rate limiting, and security.
  • From any frontend on Vercel, you can use Vercel Secure Compute with VPC peering to establish a secure, private connection to your Kubernetes backend. Your serverless functions can then communicate seamlessly with your backend APIs, just as they would with a database.
  • You also have the option to orchestrate your frontend and backend together, which can provide a streamlined, version-controlled approach to managing your entire infrastructure. Tools like Terraform excel in managing infrastructure as code, including the configuration of Vercel Secure Compute alongside your Kubernetes resources.
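From the Vercel side, calling a JWT-protected Kubernetes endpoint is ultimately just an authenticated request. The helper below only builds that request; the base URL, path, and token source are hypothetical placeholders, and in a real function the token would come from a secret, not a literal:

```typescript
// Illustrative helper for a serverless function calling a Kubernetes-hosted
// API behind JWT auth. The endpoint and token here are placeholders.
function buildBackendRequest(baseUrl: string, path: string, jwt: string): Request {
  return new Request(new URL(path, baseUrl), {
    headers: {
      authorization: `Bearer ${jwt}`, // verified by the API gateway or service
      accept: "application/json",
    },
  });
}
```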

For larger, complex codebases, our sales team offers personalized guidance and support. We then work with you to ensure a smooth transition during migration, helping you tailor the architecture and configuration to your specific requirements.

Delighting your developers and end-users

When you compare deploying a frontend on Kubernetes to deploying a frontend on Vercel, it’s a battle of completely manual configuration versus true automation for every step of the development and deployment lifecycle.

This enhanced developer experience creates a positive feedback loop for your entire business:

  • Platform engineers can focus on optimization, rather than constantly building new infrastructure.
  • Frontend devs no longer have to master the whole stack, and they can choose the JavaScript tooling that suits them.
  • All teams solve even the toughest challenges through vastly increased iteration velocity.
  • Your business can attract and onboard new talent much more easily.

All of this results in better features for end-users, faster.
