Recent Posts

Loading Fast and Slow: async React Server Components and Suspense

When the web was young, HTML pages were served to clients running web browser software that would turn the HTML text response into rendered pixels on the screen. At first these were static HTML files, but then server-side technologies like PHP came along to allow the server to customize the HTML sent to each client.

CSS came along to change the appearance of what got rendered. JavaScript came along to make the page interactive. Suddenly the page was no longer the atomic unit of the web experience: pages could modify themselves right there inside the browser, without the server being in the loop at all.

This was good because the network is slow and less than 100% reliable. It heralded a new golden age for the web. Progressively, less and less of the HTML content was sent to clients as pre-rendered HTML, and more and more was sent as JSON data that the client would render into HTML using JavaScript.

This all required a lot more work to be done on the client, though, which meant the client had to download a lot more JavaScript. Before long we were shipping MEGABYTES of JavaScript down to the web browser, and we lost the speediness we had gained by not reloading the whole page all the time. Page transitions were fast, but the initial load was slow. Megabytes of code shipped to the browser can multiply into hundreds of megabytes of device memory consumed, and not every device is your state-of-the-art MacBook Pro.

Single Page Applications ultimately do the same thing as that old PHP application did - render a bunch of HTML and pass it to the browser to render. The actual rendered output is often a few kilobytes of plain text HTML, but we downloaded, parsed and executed megabytes of JavaScript to generate those few kilobytes of HTML. What if there was a way we could keep the interactivity of a SPA, but only send the HTML that needs to be rendered to the client?

Enter React Server Components

React Server Components are one of the biggest developments in React for years, with the potential to solve many of these problems. RSCs allow us to split our page rendering into two buckets - components rendered on the client (traditional React style) and components rendered on the server (traditional web style).

Let's say we're building an application to help us manage devices, so we want some CRUD. Probably we're going to have a Devices index page where we can look at the list of Devices, and then either click on one to see the details, or click a button to create a new one. We might also want to edit or delete devices.

In the traditional React client-side mindset, we would build ourselves a page that will be rendered in the browser - it will need to fetch the Devices data from our backend, wait until the response comes back, handle any errors, and then render the list of devices. We might use a library like SWR to handle the fetching and caching of the data, and we might use a library like React Query to handle the mutation of the data. You've probably written this component a thousand times.
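Such a component might look something like this minimal sketch (the /api/devices endpoint and Device shape here are hypothetical; real versions tend to accumulate far more edge-case handling):

'use client';
import { useEffect, useState } from 'react';

type Device = { id: number; name: string };

export default function DevicesPage() {
  const [devices, setDevices] = useState<Device[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<Error | null>(null);

  useEffect(() => {
    fetch('/api/devices') // hypothetical endpoint
      .then((res) => {
        if (!res.ok) throw new Error('Failed to fetch devices');
        return res.json();
      })
      .then(setDevices)
      .catch(setError)
      .finally(() => setLoading(false));
  }, []);

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong: {error.message}</p>;

  return (
    <ul>
      {devices.map((device) => (
        <li key={device.id}>{device.name}</li>
      ))}
    </ul>
  );
}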

Maybe we'd end up with something that looks like this:

Example of a basic Devices CRUD index screen
It's a list of devices, with a button to add a new one.

You've seen the code to do this on the client side a thousand times before, with all its useState, useEffect, fetch, try/catch and other boilerplate. It's easy to create bugs in this code, to forget to handle edge cases, and to end up with a page that doesn't work as expected. What if we could write it like this instead?

app/devices/page.tsx
import { getDevices } from "@/models/device";

import { Heading } from "@/components/common/heading";
import { Button } from "@/components/common/button";
import DevicesTable from "@/components/device/table";

export default async function DevicesPage() {
  const devices = await getDevices();

  return (
    <div>
      <div className="flex w-full flex-wrap items-end justify-between gap-4 pb-6">
        <Heading>Devices</Heading>
        <div className="flex gap-4">
          <Button href="/devices/create">Add Device</Button>
        </div>
      </div>
      <DevicesTable devices={devices} />
    </div>
  );
}

This is a React Server Component. In this brave new world, you can tell it's a server component because the file doesn't start with 'use client'. RSCs are still pretty new and only supported in frameworks like Next.js that have a server-side rendering capability. By default, all components in Next.js are server components unless your file starts with 'use client'.

The main thing this component is doing is fetching device data via the getDevices function, which is all running on the server side and probably reading from a database. By doing this on the server, we avoid a) an extra HTTP round-trip to fetch the data separately from the React component, and b) all of the client-side logic required to make that work. Our code is clean and simple, with the magic of async/await making it read as though it's synchronous, which is easier on human brains.
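We don't see getDevices here, but in a real app it might be as simple as this sketch (assuming a Prisma client singleton at @/lib/prisma - any server-side data source works, since this code never ships to the browser):

models/device.ts
import { prisma } from "@/lib/prisma"; // assumed Prisma client singleton

// Runs only on the server, so it can talk to the database directly
export async function getDevices() {
  return prisma.device.findMany({ orderBy: { name: "asc" } });
}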

Let's have a quick look at the layout.tsx file that this component is rendering into:

layout.tsx
export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body>
        {children}
      </body>
    </html>
  );
}

Ok that's about as basic as it gets. The RootLayout component is also a React Server Component - it gets rendered on the server and the resulting HTML is sent to the client. When we visit the /devices URL, the server will render the app/devices/page.tsx component and shove it where we put {children} in the layout.tsx file.

But there's a wrinkle here - our DevicesPage component is defined as an async function. That's because, in this case, we need to make some asynchronous calls to fetch the data we need to render the page. So of course it's got to be async, but how does that mesh with our synchronous rendering of the layout and returning of the response to the client?

Well, by default, it means that the server will have to wait for the async DevicesPage function to finish before it can render the page and send it to the client. If our database lookup is slow, this means the user is left looking at a completely blank screen for a while. Not a great user experience.

To convince you of this, I created a skeleton Next.js application that is currently running at https://rsc-suspense-patterns.edspencer.net/. It has 5 pages, all of which are React Server Components, and all of which have different treatments of the async data fetching. The code for this application is available at https://github.com/edspencer/rsc-suspense-patterns.

Vanilla async Server Component rendering

The first page in my little skeleton app is at https://rsc-suspense-patterns.edspencer.net/slow/no-suspense - the best thing to do is open that in a new window and watch it load. You'll see nothing happen for 3 seconds, then suddenly the whole page appears at once. This is because the page.tsx for that URL is exactly what I showed you in the code block above - an async function that fetches some data and then renders it. The call to getDevices there just waits 3 seconds before returning a static array of fake data.
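In other words, the demo's getDevices is presumably something like this (the device shape here is illustrative):

models/device.ts
// Simulate a slow database call: wait 3 seconds, then return canned data
const fakeDevices = [
  { id: 1, name: "Living Room Thermostat" },
  { id: 2, name: "Garage Door Sensor" },
];

export async function getDevices() {
  await new Promise((resolve) => setTimeout(resolve, 3000));
  return fakeDevices;
}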

This page feels broken, right? Nothing happens for 3 seconds, which is more than enough time to make a user think the page is broken and leave. With React Suspense, though, we can do better than this, starting with the next page in my little app.

Page-level Suspense boundaries with loading.tsx

Next.js provides a nice little convention for providing page-level Suspense behavior, including for React Server Component pages. Suspense, if you're not familiar with it, is a way for your React application to render everything that it can, show that to the user, and then stream the remaining components into the browser as they become ready.

With Next.js, we can just create a loading.tsx file in the same directory as our page.tsx file, and it will be used as a fallback while the page is loading. This is a great way to show a spinner or other loading indicator to the user while the page loads. Here's how simple that can be:

app/slow/suspense/loading.tsx
export default function Loading() {
  return (
    <div className="flex justify-center items-center h-64">
      <div className="animate-spin rounded-full h-16 w-16 border-b-2 border-gray-900"></div>
    </div>
  );
}

Just by defining this file, Next.js does a little work under the covers, resulting in the following behavior:

  • When the page first loads, the page.tsx component rendering is initiated, but doesn't render immediately
  • While that async function is fetching data/doing whatever else before rendering, the loading.tsx component is rendered instead
  • When the async function finishes, the page.tsx component is rendered and replaces the loading.tsx component
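Conceptually, defining a loading.tsx file amounts to Next.js wrapping your page in a Suspense boundary for you, roughly:

// What Next.js effectively renders when loading.tsx is present
<Suspense fallback={<Loading />}>
  <Page />
</Suspense>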

You can see this in action at https://rsc-suspense-patterns.edspencer.net/slow/suspense. Again, to really see what is going on there, open the link in a brand new browser tab/window. This time, we get the page header menu rendering immediately - it is part of layout.tsx, and for 3 seconds we see our loading.tsx render - a spinner in this case. After 3 seconds, the page.tsx component renders and replaces the spinner:

Example of a basic Devices CRUD index screen
Our outer layout appears immediately; the page content is just our loading spinner

Component-level suspense boundaries

Page-level Suspense boundaries are an improvement to our vanilla version because at least we're rendering some of our application immediately, and showing the user that something is happening via a loading spinner. It's also super-easy to just drop a loading.tsx file into a component directory and have it work.

But we can do better than that. We can use Suspense boundaries at the component level to show the user that something is happening at a more granular level. Here's the actual source code that powers the third and final slow loading RSC page in my demo - which you can see live at https://rsc-suspense-patterns.edspencer.net/slow/component-suspense:

app/slow/component-suspense/page.tsx
import { getDevices } from "@/models/device";

import Heading from "@/components/common/heading";
import DevicesTable from "@/components/device/table";
import AddDeviceButton from "@/components/device/button";

import Loading from "@/components/common/loading";
import { Suspense } from "react";

export default function DevicesPage() {
  return (
    <>
      <div className="flex w-full flex-wrap items-end justify-between gap-4 pb-6">
        <Heading>Devices (3000ms database call, Component-level Suspense)</Heading>
        <div className="flex gap-4">
          <AddDeviceButton />
        </div>
      </div>
      <Suspense fallback={<Loading />}>
        <LoadedDevicesTable />
      </Suspense>
      <p>
        On this screen, we get all of the page contents rendered instantly (including this paragraph),
        but see a loading spinner while the table is loaded, rendered, and streamed back to the client.
      </p>
    </>
  );
}

async function LoadedDevicesTable() {
  const devices = await getDevices();

  return <DevicesTable devices={devices} />;
}

We've done three things here:

  1. We split the loading and rendering of the <DevicesTable> into a separate (async) component called <LoadedDevicesTable>
  2. We made our DevicesPage component synchronous, so it renders immediately
  3. We wrapped our new <LoadedDevicesTable> component in a <Suspense> component, with a fallback prop that renders our loading spinner

RSC pages with Component-level suspense
Now the entire page renders instantly, except for the data table

If you open up the live demo page, you'll see that the entire page renders instantly, including the header and footer, and the paragraph explaining what's going on. The only thing that doesn't render immediately is the data table, which shows a loading spinner until the data is fetched and the table is rendered.

This is a much better user experience than the vanilla version, and even the page-level Suspense version. It's a great way to show the user that something is happening, and that the page isn't broken, while still rendering as much of the page as possible immediately. Adding a <Suspense> wrapper is every bit as easy as adding a loading.tsx file, and will often produce a better user experience.

Now your application is ~90% rendering on the server side, using React Server Components, and only the interactive parts are rendered on the client side. This is a great way to get the best of both worlds - the speed and reliability of server-side rendering, and the interactivity of client-side rendering.

Implications for React Server Components

Generally speaking, if a page requires several database/RPC calls to load its data, it will usually be significantly faster to render that page on the server side than on the client side. This is because the server usually has a fast, low-latency connection to the database, and can render the page in a single pass.

But this is not a panacea - databases that started out fast often become slow over time. UX patterns (like not using Suspense) that made total sense with a 10ms data fetch can become a problem when that fetch takes 3000ms or more. If you end up with one or more of those slow data fetches on a page, you're not going to be giving your users a great experience if you use async React Server Components at the page level.

Consider making page-level RSCs synchronous

The approach in the code block below (which is the same approach as above) is one way to get around that, where we split the async code out of the Page component. By confining ourselves to rendering only synchronous components at the page level, we can render the page immediately and then stream in the async components as they're ready. This is a great way to give the user a sense of progress and keep them engaged with the page.

app/my-lovely-horse/page.tsx
import { getDevices } from "@/models/device";

import DevicesTable from "@/components/device/table";
import Loading from "@/components/common/loading";
import { Suspense } from "react";

//synchronous - fast!
export default function FastRSCPage() {
  return (
    <>
      <h2>My lovely page</h2>
      <Suspense fallback={<Loading />}>
        <SlowLoadingComponent />
      </Suspense>
    </>
  );
}

//async - can be slow but doesn't matter as it's not at the page level
async function SlowLoadingComponent() {
  const devices = await getDevices();

  return <DevicesTable devices={devices} />;
}

In this approach, our <FastRSCPage> and <SlowLoadingComponent> components are both still React Server Components. They even happen to be in the same file, though they don't have to be. It's just that splitting the async code out of our top-level component (the "page") means that we can render as much of the UI as possible, essentially instantly.

Page Interactivity waits for Suspense (sometimes)

Our little page has an Add Device button, which is the only 'use client' component in the entire app. All it does in this demo is fire an alert, which ought to convince you it is a component running in the browser.
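That button is presumably not much more than this sketch (the demo's actual markup may differ):

components/device/button.tsx
'use client';

// The only client component in the demo: it fires an alert to prove
// it's running in the browser
export default function AddDeviceButton() {
  return <button onClick={() => alert('Add Device clicked!')}>Add Device</button>;
}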

But if you open up https://rsc-suspense-patterns.edspencer.net/slow/component-suspense and click the Add Device button while the spinner is still spinning, nothing happens. Click it again after the spinner goes away, and you'll see the alert. This might be a little unexpected - the button is in the synchronous part of the page, not within the Suspense boundary, so why doesn't it work?

I actually don't know. React 18 shipped with an excellent post explaining how Suspense is supposed to work, including Selective Hydration. Hydration is when you render your page HTML on the server side, the client downloads it, and then React spins up in the client and attaches itself to all that lovely HTML the server sent down. Until hydration is complete, your React app may be mostly rendered, but it is not interactive.

Selective Hydration is supposed to enable React to automatically hydrate the parts of your application that are fully rendered, running hydration again for any components inside <Suspense> boundaries that were not ready the first time hydration occurred.

This should mean that the Add Device button is interactive as soon as the page is hydrated, even if the data table is still loading. As you'll note, it doesn't seem to actually do that, so watch out for behavior like this in your own apps. All of this stuff is pretty new, so it's possible that there are still some bugs to be ironed out. If I figure that out I'll let you know.

Conclusions and further reading

React Server Components are a powerful new feature in React that can be a game-changer for the UX of your applications when implemented correctly. They're also a Big Rewrite trap that could seem annoying if you have thousands of hours invested in a React app that works the Old Way. But if you're starting a new project, or have a project that's not working as well as you'd like, they're definitely worth a look.

I read some excellent posts by some fine folks while embarking on my own journey of understanding around this topic - here are three articles on RSC that you should consider reading:


Using Server Actions with Next JS

React and Next.js introduced Server Actions a while back, as a new/old way to call server-side code from the client. In this post, I'll explain what Server Actions are, how they work, and how you can use them in your Next.js applications. We'll look at why they are and are not APIs, why they can make your front end code cleaner, and why they can make your backend code messier.

Everything old is new again

In the beginning, there were <form>s. They had an action, and a method, and when you clicked the submit button, the browser would send a request to the server. The server would then process the request and send back a response, which could be a redirect. The action was the URL of the server endpoint, and the method was usually either GET or POST.

<form action="/submit" method="POST">
  <input type="text" name="name" />
  <button type="submit">Submit</button>
</form>

Then came AJAX, and suddenly we could send requests to the server without reloading the page. This was a game-changer, and it opened up a whole new world of possibilities for building web applications. But it also introduced a lot of complexity, as developers had to manage things like network requests, error handling, and loading states. We ended up building React components like this:

TheOldWay.jsx
//this is just so 2019
import { useState } from 'react';

export default function CreateDevice() {
  const [name, setName] = useState('');
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  const handleSubmit = async (e) => {
    e.preventDefault();
    setLoading(true);
    try {
      await fetch('/api/devices', {
        method: 'POST',
        body: JSON.stringify({ name }),
        headers: {
          'Content-Type': 'application/json',
        },
      });
    } catch (err) {
      setError(err);
    } finally {
      setLoading(false);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <input type="text" value={name} onChange={(e) => setName(e.target.value)} />
      <button type="submit" disabled={loading}>Submit</button>
      {error && <p>{error.message}</p>}
    </form>
  );
}

This code is fine, but it's a lot of boilerplate for something as simple as submitting a form. It's also not very readable, as the logic for handling the form submission is mixed in with the UI code. Wouldn't it be nice if we could go back to the good old days of <form>s, but without the page reload?

Enter Server Actions

Now, with Server Actions, React is bringing back the simplicity of the old days, while still taking advantage of the power of modern web technologies. Server Actions allow you to call server-side code from the client, just like you would with a traditional form submission, but without the page reload. It wants you to think that this is all happening without an API on the backend, but this isn't true. It's not magic after all.

Here's how we can write the same form using Server Actions:

app/components/AddDeviceForm.tsx
'use client';
import { useFormState } from 'react-dom';
import { createDeviceAction } from '@/app/actions/devices';

export function AddDeviceForm() {
  const [state, formAction] = useFormState(createDeviceAction, {});

  return (
    <form action={formAction} className="create-device">
      <fieldset>
        <label htmlFor="name">Name:</label>
        <input type="text" name="name" id="name" placeholder="type something" />
        <button type="submit">Submit</button>
      </fieldset>
      {state.status === 'error' && <p className="text-red-500">{state.message}</p>}
      {state.status === 'success' && <p className="text-green-500">{state.message}</p>}
    </form>
  );
}

Here's that same AddDeviceForm Component running live in this page. It's a real React component, so try submitting it with and without text in the input field. In both cases it's hitting our createDeviceAction function, which is just a simple function that returns a success or error message based on the input:

One nice thing about this is that the Enter key works on your keyboard without any extra code. This is because the form is a real form, and the submit button is a real submit button. The useFormState hook is doing the work of intercepting the form submission and calling the server action instead of the default form submission. It feels more like the old school web.

And here's the actual server action that is being called, in a file called app/actions/devices.ts:

app/actions/devices.ts
'use server';

export async function createDeviceAction(prevState: any, formData: FormData) {
  const name = formData.get('name');

  if (name) {
    const device = {
      name,
      id: Math.round(Math.random() * 10000),
    };

    return {
      status: 'success',
      message: `Device '${name}' created with ID: ${device.id}`,
      device,
    };
  } else {
    return {
      status: 'error',
      message: 'Name is required',
    };
  }
}

The code here is simulating a database mutation and doing some basic validation. This all ought to look pretty familiar. Again, this is the actual copy/pasted code running behind the scenes.

How does this work?

We didn't set up any API routes, we didn't write any network request code, and we didn't have to handle any loading states or error handling. There is no code I am not showing you, stitching things together. We just wrote a simple form, and the Server Actions library took care of the rest. It's like magic!

But it's not magic. It's HTTP. If you open up your browser's developer tools and submit the form, you'll see a network request being made to the server, just like with a traditional form submission. The only difference is that the request is being intercepted by the Server Actions library and handled by the createDeviceAction function instead of the default form submission handler. This results in a POST request being sent to the current URL, with the form data and a bunch of other stuff being sent along with it.

Form submission network request
The network request that our form made. The actual data we sent is in the 1_name key

Here's what the response looked like:

Form submission network response
We got our data back, plus some other stuff Next.js sends
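In case you can't inspect the request yourself, it looks roughly like this - a heavily abbreviated sketch with an invented action ID. Next.js identifies which action to run via the Next-Action header, and our form field arrives as the 1_name entry in the multipart body:

POST /the-current-page-url HTTP/1.1
Next-Action: 7f6c0a8b...
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary...

------WebKitFormBoundary...
Content-Disposition: form-data; name="1_name"

my new device
------WebKitFormBoundary...
(plus a few Next.js bookkeeping fields)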

Next.js has basically created an API endpoint for us, and then provided its own wrapper calls and data structures on both the request and response cycles, leaving us to focus solely on our UI and business logic.

Visual feedback for slower requests

In many cases, the backend may take a few seconds to process the user's request. It's always a good idea to provide some visual feedback to the user while they are waiting. There's another lovely new React hook called useFormStatus that we can use to show a pending state while the request is in flight. Here's a slightly modified version of the form that gives the user some feedback while the request is being processed:

app/components/AddDeviceFormSlow.tsx
'use client';
import { useFormState, useFormStatus } from 'react-dom';
import { createDeviceActionSlow } from '@/app/actions/devices';

export function AddDeviceFormSlow() {
  const [state, formAction] = useFormState(createDeviceActionSlow, {});

  return (
    <form action={formAction} className="create-device">
      <fieldset>
        <label htmlFor="name">Name:</label>
        <input type="text" name="name" id="name" placeholder="type something" />
        <SubmitButton />
      </fieldset>
      {state.status === 'error' && <p className="text-red-500">{state.message}</p>}
      {state.status === 'success' && <p className="text-green-500">{state.message}</p>}
    </form>
  );
}

//this has to be a separate component because we can't use the useFormStatus hook in the
//same component that has the <form>. Sadface.
function SubmitButton() {
  const { pending } = useFormStatus();

  return (
    <button type="submit" disabled={pending}>
      {pending ? 'Submitting...' : 'Submit'}
    </button>
  );
}

This is almost identical to the first example, but I've split the submit button into a separate component and used the useFormStatus hook to disable the button and change its label while the request is pending. It's also now pointing at the createDeviceActionSlow function, which is identical to the createDeviceAction function except it has a 3 second delay before returning the response.
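The post doesn't show createDeviceActionSlow, but given that description it's presumably just a delayed wrapper - a sketch:

app/actions/devices.ts
// Hypothetical sketch: identical to createDeviceAction, with an artificial
// 3 second delay added to simulate a slow backend
export async function createDeviceActionSlow(prevState: any, formData: FormData) {
  await new Promise((resolve) => setTimeout(resolve, 3000));
  return createDeviceAction(prevState, formData);
}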

Here's the live component - give it a whirl:

That's pretty cool. The useFormStatus hook is doing all the work of tracking the request status and updating the UI accordingly. It's a small thing, but it makes both the user experience and the developer experience a lot better.

What about the API?

It has been the case for quite some time that the greatest value in a web application is often not found in its UI but in its API. The UI is just a way to interact with the API, and the API is where the real work gets done. If your application is genuinely useful to other people, there's a good chance they will want to integrate with it via an API.

There is a school of thought that says your UI should be treated just the same as any other API client for your system. This is a good school, and its teachers are worth listening to. UIs are for humans and APIs are for machines, but there's a lot of overlap in what they want in life:

  • A speedy response
  • To know if their action succeeded, or why it failed
  • To get the data they asked for, in a format they can easily consume

Can't we service them both with the same code? Yes, we can. But it's not always as simple as it seems.

The real world spoils the fun

Way up in that second example snippet, we were making a POST request to /api/devices; our UI code was talking to the exact same API endpoint that any other API user would be talking to. There are many obvious benefits to this, mostly centering around the fact that you don't need to maintain parallel code paths for UI and API users. I've worked on systems that did that, and it can end up doubling your codebase.

Server Actions are great, but they take us away from HTTP and REST, which are bedrock technologies for APIs. It's very easy to slap together a bunch of Server Actions for your UI, and then find yourself in a mess when you need to build an API for someone else to use.

The reality is that although API users and UI users do have a lot in common, they also have differences. In our Server Action examples above we were returning a simple object with a status and a message, but in a real API you would likely want to return a more structured response, with an HTTP status code, headers, and a body. We're also much more likely to need things like rate limiting for our API users, which we didn't have to think about for our UI users.

Consider a super simple POST endpoint in a real API. Assume you're using Prisma and Zod for validation - a fairly common pairing. Here's how you might write that API endpoint:

app/api/devices/route.ts
import { NextRequest, NextResponse } from "next/server";
import { Prisma } from "@prisma/client";
import { ZodError } from "zod";

import { prisma } from "@/lib/prisma"; // assumed Prisma client singleton
import { DeviceSchema } from "@/lib/schemas"; // assumed Zod schema location

export async function POST(req: NextRequest) {
  try {
    const body = await req.json();

    const data = {
      type: body.type,
      hostname: body.hostname,
      credentials: body.credentials,
    } as Prisma.DeviceCreateInput;

    DeviceSchema.parse(data);
    const device = await prisma.device.create({ data });

    return NextResponse.json(device, { status: 201 });
  } catch (error) {
    if (error instanceof ZodError) {
      return NextResponse.json({ error: { issues: error.issues } }, { status: 400 });
    }
    return NextResponse.json({ error: "Failed to create device" }, { status: 500 });
  }
}

This API endpoint consumes JSON input (assume that auth is handled via middleware), validates it with Zod, and then creates a new device in the database. If the input is invalid, it returns a 400 status code with an error message. If the input looks good but there's an error creating the device, it returns a 500 status code with an error message. If everything goes well, it returns a 201 status code with the newly created device.

Now let's see how we might write a Server Action for the same functionality:

app/actions/devices.ts
'use server';

import { Prisma } from "@prisma/client";
import { ZodError } from "zod";
import { revalidatePath } from "next/cache";

import { prisma } from "@/lib/prisma"; // assumed Prisma client singleton
import { DeviceSchema } from "@/lib/schemas"; // assumed Zod schema location

export async function createDeviceAction(prevState: any, formData: FormData) {
  try {
    const data = {
      type: formData.get("type"),
      hostname: formData.get("hostname"),
      credentials: formData.get("credentials"),
    } as Prisma.DeviceCreateInput;

    DeviceSchema.parse(data);
    const device = await prisma.device.create({ data });

    revalidatePath("/devices");

    return {
      success: true,
      message: "Device Created Successfully",
      device,
    };
  } catch (error) {
    if (error instanceof ZodError) {
      return {
        success: false,
        message: "Validation Error",
        error: {
          issues: error.issues,
        },
      };
    }

    return {
      success: false,
      message: "Failed to create device",
      error: JSON.stringify(error),
    };
  }
}

The core of these 2 functions is the exact same 2 lines - one to validate using Zod, the other to persist using Prisma. The flow is exactly the same, but in one case we're consuming JSON, in the other reading form data. In one case we're returning NextResponse objects with HTTP status codes, in the other we're returning plain objects with success and message keys. The Server Action can also take advantage of nice things like revalidatePath to trigger a revalidation of the page that called it, but we don't want that line in our API endpoint.

Somewhere along the line we will want to show a message to the UI user telling them what happened - hence the message key in the Server Action (the API user can just read the HTTP status code). We could have moved that logic to the UI instead, perhaps returning a statusCode key in the JSON response to emulate an HTTP status code. But that's just reimplementing part of HTTP, and moving the problem to the client, which now has to provide the mapping from a status code to a message. It also means a bigger bundle if we want to support internationalization for those messages.

What this all means is that if you want to take advantage of the UI code cleanliness benefits that come from using Server Actions, and your application conceivably might need an API now or in the future, you need to think about how you are going to avoid duplicating logic between your Server Actions and your API endpoints. This may be a hard problem, and there's no one-size-fits-all solution. Yes you can pull those 2 lines of core logic out into a shared function, but you're still left with a lot of other almost-the-same-but-not-quite code.
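For what it's worth, extracting those two lines of core logic might look like this sketch (module paths assumed, as before):

lib/devices.ts
import type { Prisma } from "@prisma/client";

import { prisma } from "@/lib/prisma"; // assumed Prisma client singleton
import { DeviceSchema } from "@/lib/schemas"; // assumed Zod schema location

// The shared core: validate, then persist. Throws a ZodError on bad input,
// leaving each caller (route handler or Server Action) to translate that
// into its own response shape
export async function createDevice(data: Prisma.DeviceCreateInput) {
  DeviceSchema.parse(data);
  return prisma.device.create({ data });
}

Both the route handler and the Server Action can then call createDevice and keep only their transport-specific input parsing and response formatting - which, as noted above, is still most of the code.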

Ultimately, it probably just requires another layer of indirection. What that layer looks like will depend on your application, but it's something to think about before you go all-in on Server Actions.


Avoiding Catastrophe by Automating OPNSense Backups

tl;dr: a Backups API exists for OPNSense. opnsense-autobackup uses it to make daily backups for you.

A few months ago I set up OPNSense on my home network, to act as a firewall and router. So far it's been great, with a ton of benefits over the eero mesh system I was replacing - static DHCP assignments, pretty local host names via Unbound DNS, greatly increased visibility and monitoring possibilities, and of course manifold security options.

However, it's also become a victim of its own success. It's now so central to the network that if it were to fail, most of the network would go down with it. The firewall rules, VLAN configurations, DNS setup, DHCP etc are all very useful and deeply embedded - if they go away, most of my network services go down: internet access, home automation, NAS, cameras, and more.

OPNSense lets you download a backup via the UI; sometimes I remember to do that before making a sketchy change, but I once wiped out the box without a recent backup, and ended up spending several hours getting things back up again. That was before really embracing things like local DNS and static DHCP assignments, which I now have a bunch of automation and configuration reliant on.

OPNSense has a built-in way to automatically create backups and upload them to a Google Drive folder. Per the docs it does this on a daily basis, uploading a new backup to Google Drive if something changed. If you want to use Google Drive for your backup storage, this is probably the right option for you, but if you want to customize how this works - either the schedule on which backups are made, or where they're sent - there are ways to do that too.

OPNSense Google Drive backups configuration
Use the built-in Google Drive backup feature if that makes more sense for you

Using the OPNSense API to create backups

OPNSense provides a simple API that allows you to download the current configuration as an XML file. It gives you the same XML file that you get when you click the "Download configuration" button manually in the OPNSense UI. It's worth downloading it manually once and just skimming through the file in your editor - it's nicely organized and interesting to peruse.

Once you've done that, though, you'll probably want to automate the process so you don't have to remember. That's fairly straightforward:

Setting up OPNSense for API backups

We need to set up a way to access the OPNSense backups API, ideally not using our root user - or indeed any user with more access privileges than necessary to create backups. To accomplish this we'll set up a new Group called backups - create the Group via the OPNSense UI, then edit it to assign the Diagnostics: Configuration History privilege. This grants access to the /api/core/backup/ APIs.

OPNSense Assign Backups privilege

Then, create a new User called backup, and add it to the new backups Group. Your Group configuration will end up looking something like this:

OPNSense Add Backups Group

Now that you have a new backup User, which has access only to configuration/backups APIs, you need to generate an API Key and Secret. Do this in the UI (your actual key will be a long random string):

OPNSense Create User Key & Secret

Creating an API Key for the user will automatically initiate a download in your browser of a text file containing 2 lines - the key itself and a secret. This is the one and only time you will be able to gain access to the secret, so save it somewhere. An encrypted version of it will be kept in OPNSense, but you'll never be able to get hold of the non-encrypted version again if you lose it. Here's what the text file will look like:

key=SUPER+TOP+SECRET+KEY
secret=alongstringofrandomlettersandnumbers

Downloading a backup via the API

Let's test out our new user with a curl command to download the current configuration. The -k tells curl to disregard the fact that OPNSense is likely to respond with an SSL certificate curl doesn't recognize (for your home network you are unlikely to care too much about this). The -u sends our new user's API Key and Secret using HTTP Basic auth:

$ curl -k -u "SUPER+TOP+SECRET+KEY":"alongstringofrandomlettersandnumbers" \
    https://firewall.local/api/core/backup/download/this > backup

$ ls -lh
total 120
-rw-r--r--  1 ed  staff    56K May 24 09:33 backup

Cool - we have a 56KB file called backup, which ends up looking something like this:

<?xml version="1.0"?>
<opnsense>
  <theme>opnsense</theme>
  <sysctl>
    <item>
      <descr>Increase UFS read-ahead speeds to match the state of hard drives and NCQ.</descr>
      <tunable>vfs.read_max</tunable>
      <value>default</value>
    </item>
    <item>
      <descr>Set the ephemeral port range to be lower.</descr>
      <tunable>net.inet.ip.portrange.first</tunable>
      <value>default</value>
    </item>
    <item>
      <descr>Drop packets to closed TCP ports without returning a RST</descr>
      <tunable>net.inet.tcp.blackhole</tunable>
      <value>default</value>

  ... 1000 more lines of this ...

</opnsense>

In my case I have a couple of thousand lines of this stuff - you may have more or less. Obviously, we wouldn't usually want to do this via a curl command, especially not one that resulted in our access credentials finding their way into our command line history, so let's make this a little bit better.
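A small wrapper script is a first improvement, reading the credentials from the environment so they stay out of your shell history - a minimal sketch (the filename and variable names here are my own, not from opnsense-autobackup):

#!/bin/sh
# backup.sh - download the current OPNSense config to a timestamped file.
# Expects API_KEY and API_SECRET in the environment rather than on the
# command line.
set -eu

HOSTNAME="${HOSTNAME:-firewall.local}"

curl -sk -u "${API_KEY}:${API_SECRET}" \
    "https://${HOSTNAME}/api/core/backup/download/this" \
    > "opnsense_$(date +%Y-%m-%d_%H-%M).xml"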

Automating it all

There are a variety of options here, on 2 main axes:

  • Where to send your backups
  • How often to make a backup

In my case I want to put the file into a git repository, along with other network configuration files. OPNSense does have a built-in way to back up files to a git repo, but I want to be able to put more than just OPNSense config files in this repo, so I went for a more extensible approach.

Daily backups seem reasonable here, as well as the option to create them ad-hoc. Ideally one would just run a single script and a timestamped backup would appear in a backups repo. As I recently set up TrueNAS SCALE on my local network, this seemed a great place to host a schedulable Docker image, so that's what I did.

The Docker image in question handles downloading the backups and pushing them to a GitHub repository. This approach allows us to easily schedule and manage our backups using TrueNAS SCALE, or anywhere else on the network you can run a docker container. It's published as edspencer/opnsense-autobackup on Docker Hub, and the source code is up at https://github.com/edspencer/opnsense-autobackup.

OPNSense autobackup logo
Behold the generative AI logo. Don't look too closely at the letters

Setting Up the Docker Container on TrueNAS SCALE

Here’s a quick walkthrough on how to set up the Docker container on TrueNAS SCALE and configure it to automate your OPNSense backups.

OPNSense Auto Backup docker image running on TrueNAS Scale
We can afford the 172kb of memory used to run opnsense-autobackup

Prerequisites

  1. Docker Installed on TrueNAS SCALE: Ensure that Docker is installed and running on your TrueNAS SCALE system.
  2. GitHub Repository: Create a GitHub repository to store your backups.
  3. GitHub Personal Access Token: Generate a GitHub personal access token with repo read/write permissions to allow the Docker container to push to your repository.

Generate a GitHub Personal Access Token

  1. Go to GitHub Settings.
  2. Click on Generate new token.
  3. Give your token a descriptive name and give it read and write permissions for your new backups repository
  4. Click Generate token.
  5. Copy the token and save it securely. You will need it to configure the Docker container.

Set Up the Docker Container on TrueNAS SCALE

Navigate to the Apps screen on the TrueNAS Scale instance, then click Discover Apps followed by Custom App. Give your app a name and set it to use the edspencer/opnsense-autobackup docker image, using the latest tag.

You'll need to provide the following environment variables, so configure those now in the Container Environment Variables section:

Name            Value
API_KEY         your_opnsense_api_key
API_SECRET      your_opnsense_api_secret
HOSTNAME        firewall.local
GIT_REPO_URL    https://github.com/your_username/your_repo.git
GIT_USERNAME    your_git_username
GIT_EMAIL       your_git_email
GIT_TOKEN       your_git_token
CRON_SCHEDULE   0 0 * * *

Set the CRON_SCHEDULE to anything you like - this one will make it run every day at midnight UTC. Click Install to finish, and you should see the app up and running. So long as you have created your GitHub repo and PAT, you should already see your first backup files in your repo. Depending on what you set for your CRON_SCHEDULE, you'll see new files automatically appearing as long as the image is running.
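If you'd rather run it outside TrueNAS, the equivalent docker run invocation looks roughly like this (substitute your own values):

docker run -d --name opnsense-autobackup \
  -e API_KEY=your_opnsense_api_key \
  -e API_SECRET=your_opnsense_api_secret \
  -e HOSTNAME=firewall.local \
  -e GIT_REPO_URL=https://github.com/your_username/your_repo.git \
  -e GIT_USERNAME=your_git_username \
  -e GIT_EMAIL=your_git_email \
  -e GIT_TOKEN=your_git_token \
  -e CRON_SCHEDULE="0 0 * * *" \
  edspencer/opnsense-autobackup:latest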

OPNSense backups in the GitHub repo
A screenshot of my own OPNSense backups repo, with backups populating automatically

And you should see some Docker log output like this:

2024-05-25 09:58:05.362503-07:00 CRON_SCHEDULE provided: 0 * * * *. Setting up cron job...
2024-05-25 09:58:07.707058-07:00 Starting cron service...
2024-05-25 09:58:07.707137-07:00 Starting backup process...
2024-05-25 09:58:07.708367-07:00 Cloning the repository...
2024-05-25 09:58:07.710068-07:00 Cloning into '/repo'...
2024-05-25 09:58:08.339297-07:00 Downloading backup...
2024-05-25 09:58:08.343397-07:00   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2024-05-25 09:58:08.343461-07:00                                  Dload  Upload   Total   Spent    Left  Speed
2024-05-25 09:58:08.379857-07:00   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 57117  100 57117    0     0  1521k      0 --:--:-- --:--:-- --:--:-- 1549k
2024-05-25 09:58:08.381179-07:00 Saving backup as latest.xml and opnsense_2024-05-25_16-58.xml...
2024-05-25 09:58:08.391197-07:00 [main 7922900] Backups generated 2024-05-25_16-58
2024-05-25 09:58:08.391785-07:00 1 file changed, 1650 insertions(+)
2024-05-25 09:58:08.391814-07:00 create mode 100644 opnsense_2024-05-25_16-58.xml
2024-05-25 09:58:09.087436-07:00 To https://github.com/edspencer/opnsense-backups.git
2024-05-25 09:58:09.087476-07:00    bce0d8a..7922900  main -> main
2024-05-25 09:58:09.090436-07:00 Backup process completed.

Conclusions and Improvements

I feel much safer knowing that OPNSense is now being continually backed up. There are a bunch of other heavily-configured devices on my network that I would like centralized daily backups for - Home Assistant and my managed switch configs being the obvious ones. More to come on those.

Obviously you could run this anywhere, not just in TrueNAS, but I like the simplicity, observability and resource reuse of using the TrueNAS installation I already set up. So far that's working out well, though it could use some monitoring and alerting in case it stops working.

For a detailed guide on setting up the Docker container and automating your backups, visit the GitHub repository. The script that actually gets run is super simple, and easily adaptable to your own needs.

