Compute on Vercel
"Compute" is an encompassing term used to describe the actions taken by a computer. When we talk about it with regard to web development and at Vercel, we use compute to describe actions such as (but not limited to) building and rendering - essential operations needed to turn your code into a site that appears for users.
When you build your application, the build tools in your framework transform your code into production-optimized files, which are ready to be deployed to servers and delivered to users. These files include:
- HTML files for statically generated pages
- JavaScript code for rendering pages on the server
- JavaScript code for making pages interactive on the client (e.g. a button that responds to clicks)
- CSS files for styling pages
Having these files is only the first step. For your site to be consumed by your users, it must be rendered. Rendering converts the code you write into the HTML representation of your UI, which the user's browser can display. Rendering can occur on a server or the client and can happen either ahead of time at build-time or during runtime when the user has requested the site.
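For example, here is a minimal sketch of server rendering with React, assuming react and react-dom are installed (the component and its props are illustrative): renderToString turns a component tree into the HTML string that can be sent to the browser.

```ts
// A minimal sketch of rendering: turning component code into an HTML string.
// The Greeting component and its props are illustrative.
import { createElement } from "react";
import { renderToString } from "react-dom/server";

function Greeting({ name }: { name: string }) {
  return createElement("h1", null, `Hello, ${name}!`);
}

// On the server (at build time or at request time), the component tree
// is converted into HTML that the browser can display immediately.
const html = renderToString(createElement(Greeting, { name: "Ada" }));
// html is now a string like "<h1>Hello, Ada!</h1>"
```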
The next section takes a closer look at where this compute happens, and describes some of the advantages and disadvantages.
Traditionally with web applications, we talk about two main locations:
- Client – This is the browser on your user's device that sends a request to a server for your application code. It then turns the response it receives from the server into an interface the user can interact with.
- Server – This is the computer in a data center that stores your application code. It receives requests from a client, does some computation, and sends back an appropriate response.
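As a minimal sketch of that exchange (the port and payload are hypothetical), the server below receives a request, does a small amount of computation, and responds; the client code, which would run in the browser, turns that response into something the user can see.

```ts
// Server: a bare Node.js server that computes a response for each request.
import { createServer } from "node:http";

createServer((req, res) => {
  const body = JSON.stringify({ message: "Hello from the server" });
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(body);
}).listen(3000);

// Client (runs in the browser): request the data and turn it into UI.
// fetch("http://localhost:3000/")
//   .then((res) => res.json())
//   .then((data) => { document.body.textContent = data.message; });
```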
Whether the computation happens on the client or the server depends on the method of content generation that you use.
With Next.js apps (and comparable frameworks), a combination of these methods is used. For example, the initial HTTP response returns HTML, while all subsequent navigations are client-side rendered.
With Client-Side Rendering (CSR), when a user makes a request to your site, the server responds with an empty HTML shell along with the JavaScript instructions to construct the UI. This response can be cached by a CDN and delivered quickly to the user. However, they still won't be able to see your site at this stage. The user's browser is then responsible for the initial rendering based on the JavaScript instructions. CSR gives the perception of being fast and allows for dynamic interactions. However, performance and capabilities are limited by the user's device.
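A minimal sketch of CSR, assuming the HTML shell contains an empty <div id="root"> and a hypothetical /api/products endpoint: the browser, not the server, does all of the rendering work.

```ts
// Client-side rendering: this script ships with the empty HTML shell and
// builds the UI in the browser after the page loads.
const root = document.getElementById("root");

async function render() {
  // The browser fetches the data itself (hypothetical endpoint)...
  const res = await fetch("/api/products");
  const products: { name: string }[] = await res.json();

  // ...and constructs the interface from it.
  const list = document.createElement("ul");
  for (const product of products) {
    const item = document.createElement("li");
    item.textContent = product.name;
    list.appendChild(item);
  }
  root?.appendChild(list);
}

render();
```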
With Server-Side Rendering (SSR), when a user makes a request to your site, your server (usually in "the cloud") generates the HTML at that time (runtime) and then returns the HTML, JSON data, and JavaScript instructions to the client's browser.
The HTML is used by the browser to show a fast non-interactive page, while the JSON and JavaScript make components interactive.
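A minimal sketch of SSR using Next.js's pages router (the data URL is hypothetical): the data is fetched and the HTML is generated on the server for every request, then hydrated on the client.

```tsx
// pages/products.tsx - server-side rendering with Next.js's pages router.
import type { GetServerSideProps } from "next";

type Props = { products: { name: string }[] };

// Runs on the server at request time (runtime), never in the browser.
export const getServerSideProps: GetServerSideProps<Props> = async () => {
  const res = await fetch("https://api.example.com/products"); // hypothetical URL
  const products: { name: string }[] = await res.json();
  return { props: { products } };
};

// Rendered to HTML on the server, then made interactive (hydrated) on the client.
export default function ProductsPage({ products }: Props) {
  return (
    <ul>
      {products.map((product) => (
        <li key={product.name}>{product.name}</li>
      ))}
    </ul>
  );
}
```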
With SSR, the server is running all the time, which can be costly. While you have flexibility over the scale and resources, this also adds overhead. Most importantly, with Vercel, your server is in one fixed location (region), so your users could be far away from it, which means greater latency before they see your site.
Tangentially related to SSR are Serverless Functions. These functions, which also run in one specified location (or region), allow you to write small chunks of code that provide additional functionality in your application, such as handling authentication, form submissions, and database queries.
When a user makes a request to your site, a serverless function will run on-demand, without you needing to manage the infrastructure, provision servers, or upgrade hardware.
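A minimal sketch of a Serverless Function written as a Next.js API route (the form-handling logic and field names are illustrative):

```ts
// pages/api/subscribe.ts - a Serverless Function handling a form submission.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "POST") {
    res.status(405).json({ error: "Method not allowed" });
    return;
  }

  const { email } = req.body; // illustrative field name
  // In a real application you might validate the input and write it to a database here.
  res.status(200).json({ subscribed: email });
}
```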
When a Serverless Function boots up from scratch, that is known as a cold boot. When it is re-used, we can say the function was warm.
Re-using a function means the underlying container that hosts it does not get discarded. State, such as temporary files, memory caches, and sub-processes, is preserved. This empowers the developer not just to minimize the time spent booting, but also to take advantage of caching data (in memory or on the filesystem) and memoizing expensive computations.
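A minimal sketch of taking advantage of a warm container, assuming a hypothetical remote configuration endpoint: state declared at module scope survives between invocations for as long as the container is re-used.

```ts
// pages/api/config.ts - caching expensive work across warm invocations.
import type { NextApiRequest, NextApiResponse } from "next";

// Module-scope state: populated on a cold boot, re-used while the container stays warm.
let cachedConfig: Record<string, string> | null = null;

async function loadConfig(): Promise<Record<string, string>> {
  if (cachedConfig) {
    return cachedConfig; // warm invocation: re-use the in-memory value
  }
  // Cold boot (or first call in this container): do the expensive work once.
  const res = await fetch("https://config.example.com/settings"); // hypothetical URL
  cachedConfig = (await res.json()) as Record<string, string>;
  return cachedConfig;
}

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const config = await loadConfig();
  res.status(200).json(config);
}
```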
It is important to note that Serverless Functions, even while the underlying container is warm, cannot leave tasks running. If a sub-process is still running when the response is returned, the entire container is frozen. When a new invocation happens, if the container is re-used, it is unfrozen, which allows sub-processes to continue running.
With Static Site Generation (SSG), all of the pages in your site are built ahead of time on the server. The generated HTML is then cached on a server (or CDN), so when a user makes a request to your site, the content can be delivered with no additional runtime compute. With SSG, the compute happens separately from the user's request: they don't need to wait for the site to be built, just delivered to them. While one of the benefits here is speed, a traditional downside is that you can't have dynamic content or personalization (unless you use client-side compute). However, you can now use Vercel Edge Middleware as one mechanism for enabling personalization on your static content.
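A minimal sketch of SSG using Next.js's pages router (the data URL is hypothetical): getStaticProps runs once at build time, and the resulting HTML is cached and served as-is to every visitor.

```tsx
// pages/blog.tsx - static site generation with Next.js's pages router.
import type { GetStaticProps } from "next";

type Props = { posts: { title: string }[] };

// Runs at build time, not when a user requests the page.
export const getStaticProps: GetStaticProps<Props> = async () => {
  const res = await fetch("https://api.example.com/posts"); // hypothetical URL
  const posts: { title: string }[] = await res.json();
  return { props: { posts } };
};

export default function Blog({ posts }: Props) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.title}>{post.title}</li>
      ))}
    </ul>
  );
}
```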
As we've described in the previous section, the computation for your app, the cache that stores its output, and the code itself can all live on servers in numerous places. Usually, there are three places your code can be stored:
- Origin Server – The server that stores and runs the original version of your app code. When the origin server receives a request, it does some computation before sending a response. The result of this computation work may be moved to a CDN.
- CDN (Content Delivery Network) – This stores static content, such as HTML, in multiple locations around the globe, placed between the client who is requesting and the origin server that is responding. When a user sends a request, the closest CDN will respond with its cached response.
- The Edge – The edge refers to the edge of the network, closest to the user. CDNs, which are also distributed around the world, can be considered part of the edge, but some edge servers can also run code. This means that both caching and code execution can happen at the edge, closer to the user. On Vercel, there are two ways to deploy to the Edge Network: Edge Middleware and Edge Functions.
In the previous section, we talked about how the Edge works and its benefits. Edge Middleware and Edge Functions are two ways developers who deploy their app on Vercel can take advantage of Edge infrastructure.
Middleware is a type of function that sits on the Edge Network, before your cached content. When a user makes a request to your site, the request first hits the middleware, which can inspect it and then send back an appropriate response from the server. Some examples of where middleware is useful:
- Authentication – e.g. is the user logged in? If so, they'll be able to access the content. If not, you can direct them to a different page.
- Geolocation – e.g. where is the user located? If they're located in the EU, you can smartly display additional privacy warnings.
- Language – e.g. what language is the user's system in? If they're in the US, we can send them to the English version of the site. If they're in Germany, we can send them to the German version of the content.
- A/B testing – e.g. what cohort are they in? If you're A/B testing a page, you can automatically display the correct page, without any lag or flicker.
- Bot identification – e.g. are they a bot?
Because middleware runs before the cache, it's an effective way of providing personalization to statically generated content. In essence, the middleware decides which version of an entire page to show your users. This allows you to deliver pre-rendered, personalized content to users with very low latency because it's running on the edge. As a developer, this gives you more control over the user experience without bloating your client application or degrading performance.
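A minimal sketch of Edge Middleware in a Next.js app, assuming geolocation is available on the incoming request via request.geo (as it is on Vercel) and that a pre-rendered /eu variant of each page exists:

```ts
// middleware.ts (project root) - choosing which version of a page to serve.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

// Abbreviated list, for illustration only.
const EU_COUNTRIES = ["DE", "FR", "IT", "ES", "NL"];

export function middleware(request: NextRequest) {
  const country = request.geo?.country ?? "US";

  if (EU_COUNTRIES.includes(country)) {
    // Serve the pre-rendered EU variant (with additional privacy notices)
    // instead of the default page.
    return NextResponse.rewrite(new URL(`/eu${request.nextUrl.pathname}`, request.url));
  }

  return NextResponse.next();
}
```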
Edge Functions work in a very similar way to Serverless Functions, but instead of running in a single region, they are copied across the Edge Network, so every time the function is invoked, the region closest to the request runs it. This results in much lower latency and, combined with zero cold-start time, allows you to provide personalization at speed.
Edge Functions run after the cache, so they are ideal for specific, dynamic parts of your site once the page is loaded, such as a date-picker with availability or a weather component. Their responses can be cached on the Edge Network, making future invocations even faster.
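A minimal sketch of an Edge Function written as a Next.js edge API route (the upstream weather URL and query handling are illustrative): the route opts into the Edge runtime and asks the Edge Network to cache its response.

```ts
// pages/api/weather.ts - an Edge Function for a dynamic weather component.
export const config = { runtime: "edge" };

export default async function handler(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  const city = searchParams.get("city") ?? "London";

  // Fetch the forecast from an upstream source (hypothetical URL).
  const upstream = await fetch(`https://weather.example.com/api?city=${city}`);
  const data = await upstream.json();

  return new Response(JSON.stringify(data), {
    headers: {
      "Content-Type": "application/json",
      // Allow the Edge Network to cache this response for future invocations.
      "Cache-Control": "s-maxage=60, stale-while-revalidate",
    },
  });
}
```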
It is important to note that Edge Functions are just one solution, not a "one size fits all" one. It is possible that the database for your site sits far from the edge server running the function. That means that even though the Edge Function can be invoked quickly, it might take twice as long to get the data as it would if the function were located closer to the data. In this scenario, you may want to use a Serverless Function instead.
See the regional Edge Functions invocation documentation to learn more.