Recently, I've been focused on integrating AI into React applications, sharing new tools and techniques that make development smoother. For instance, I've launched NarratorAI: Trainable AI assistant for Node and React, a project designed to enhance user interaction in applications. My exploration of content recommendations continues with ReadNext: AI Content Recommendations for Node JS and AI Content Recommendations with TypeScript, where I discuss leveraging AI to improve user experiences.
Additionally, I've been diving into practical uses of React Server Components, with articles like Error handling and retry with React Server Components and Promises across the void: Streaming data with RSC. Each of these posts reflects my ongoing commitment to utilizing open-source technologies for better development practices.
NarratorAI: Trainable AI assistant for Node and React
Every word in every article on this site was, for better or worse, written by me: a real human being. Recently, though, I realized that various pages on the site kinda sucked. Chiefly I'm talking about the Blog home page, tag pages like this one for articles tagged with AI, and other places where I could do with some "meta-content".
By meta-content I mean content about content, like the couple of short paragraphs that summarize recent posts for a tag, or the outro text that now appears at the end of each post, along with the automatically generated Read Next recommendations that I added recently using ReadNext.
If you go look at the RSC tag, for example, you'll see a couple of paragraphs that summarize what I've written recently about React Server Components. The list of article excerpts underneath is a lot more approachable with that high-level summary at the top. Without the intro, the page just feels neglected and incomplete.
But the chances of me remembering to update that intro text every time I write a new post about React Server Components are slim to none. I'll write it once, it'll get out of date, and then it will be about as useful as a chocolate teapot. We need a better way. Ideally one that also lets me play by watching the AI stream automatically generated content before my very eyes:
ReadNext: AI Content Recommendations for Node JS
Recently I posted about AI Content Recommendations with TypeScript, which concluded by introducing a new NPM package I've been working on called ReadNext. This post is dedicated to ReadNext, and goes into more detail about how to use it in Node JS, React, and other JavaScript projects.
What it is
ReadNext is a Node JS package that uses AI to generate content recommendations. It's designed to be easy to use, and can be integrated into any Node JS project with just a few lines of code. It is built on top of LangChain, and delegates to an LLM of your choice for summarizing your content to generate recommendations. It runs locally, does not require you to deploy anything, and has broad support for a variety of content types and LLM providers.
ReadNext is not an AI itself, nor does it want your money, your data or your soul. It's just a library that makes it easy for developers who use JavaScript as their daily driver to find related content. It's best used at build time, and can be integrated into your CI/CD pipeline to generate recommendations for your content as part of your build process.
How to use it
Get started in the normal way:
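Something like the usual NPM dance (assuming the package is published under the readnext name; the project's README is the source of truth here):

```sh
# assumed package name - check the ReadNext repository for the real one
npm install readnext
```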
Configure a ReadNext instance:
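A minimal sketch of the setup; the `ReadNext.create` call and the `cacheDir` option below are assumptions based on the description above rather than the package's confirmed API, so check the README for the real option names:

```typescript
import { ReadNext } from 'readnext';

// Hypothetical: a local cache directory keeps summaries and embeddings
// between runs, so unchanged content doesn't get re-summarized
const readNext = await ReadNext.create({
  cacheDir: './.readnext',
});
```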
Index your content:
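Again a hedged sketch: the `index` and `suggest` calls and their argument shapes are illustrative assumptions, not documented API:

```typescript
// Your content, however you load it (filesystem, CMS, etc.)
const posts: Array<{ slug: string; body: string }> = [
  { slug: 'my-article-slug', body: '...article text...' },
  // ...the rest of your content
];

// Hypothetical: feed ReadNext your articles so it can summarize and embed them
await readNext.index({
  sourceDocuments: posts.map(post => ({
    id: post.slug,
    pageContent: post.body,
  })),
});

// Later (e.g. at build time), ask for related articles for a given post
const related = await readNext.suggest({ id: 'my-article-slug', limit: 3 });
```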
AI Content Recommendations with TypeScript
In the last post, we used TypeScript to create searchable embeddings for a corpus of text content and integrated it into a chat bot. But chat bots are the tomato ketchup of AI - great as an accompaniment to something else, but not satisfying by themselves. Given that we now have the tools to vectorize our documents and perform semantic searches against them, let's extend that to generate content recommendations for our readers.
At the bottom of each of my blog articles are links to other posts that may be interesting to the reader based on the current article. The lo-fi way this was achieved was to find all the other posts which overlapped on one or more tags and pick the most recent one.
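The lo-fi version is only a few lines; here's a minimal sketch (the `Post` shape is just for illustration):

```typescript
type Post = { slug: string; tags: string[]; date: Date };

// Pick the most recent other post that shares at least one tag
function readNextFor(current: Post, posts: Post[]): Post | undefined {
  return posts
    .filter(p => p.slug !== current.slug)
    .filter(p => p.tags.some(tag => current.tags.includes(tag)))
    .sort((a, b) => b.date.getTime() - a.date.getTime())[0];
}
```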
Quite often that works ok, but I'm sure you can think of ways it could pick a sub-optimal next article. Someone who knows the content well could probably pick better suggestions at least some of the time. LLMs are really well-suited to tasks like this, and should in theory have several advantages over human editors (such as not forgetting what I wrote last week).
We want to end up with some simple UI like this, with one or more suggestions for what to read next:
So how do we figure out which content to recommend based on what you're looking at?
Blending Markdown and React components in NextJS
Authoring long-form content like blog posts is a pleasant experience with Markdown as it lets you focus on the content without worrying about the presentation or making the browser happy. Spamming `<p>` and `<div>` tags all over the place is a PITA and serves as a distraction from the content you're working on.
However, in a blog like this one, which deals with a lot of React/node/nextjs content, static text and images are limiting. We really want our React components to be live on the page with all of the richness and composability that React and JSX bring - so how do we blend the best of both of these worlds?
MDX: Markdown plus React
MDX is an extension to Markdown that also allows you to import and use React components. It lets you write content like this:
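For example, something along these lines (a representative snippet rather than the post's original example):

```mdx
## Why error boundaries matter

Regular **Markdown** works as usual, but we can drop React components
straight into the prose:

<Aside type="info">
  Error boundaries only catch errors below them in the tree.
</Aside>
```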
That's rendering an `<Aside>` component, which is a simple React component I use in some of my posts and looks like this:
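Something like this; the real component's props and styling aren't shown in this excerpt, so treat it as a plausible reconstruction:

```tsx
import React from 'react';

// A minimal Aside: renders its children in a styled callout box
export function Aside({
  type = 'info',
  children,
}: {
  type?: 'info' | 'warning';
  children: React.ReactNode;
}) {
  return <aside className={`aside aside-${type}`}>{children}</aside>;
}
```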
That's really cool, and we can basically use any React component(s) we like here. But first let's talk a little about metadata.
Introducing InformAI - Easy & Useful AI for React apps
Most web applications can benefit from AI features, but adding AI to an existing application can be a daunting prospect. Even a moderate-sized React application can have hundreds of components spread across dozens of pages. Sure, it's easy to tack a chat bot onto the bottom corner, but it won't be useful unless you integrate it with your app's contents.
This is where InformAI comes in. InformAI makes it easy to surface all the information that you already have in your React components to an LLM or other AI agent. With a few lines of React code, your LLM can now see exactly what your user sees, without having to train any models, implement RAG, or any other expensive setup.
InformAI is not an AI itself; it just lets you expose components and UI events via the simple `<InformAI />` component. Here's how we might add AI support to a React component that shows a table of a company's firewalls:
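A sketch of what that can look like; the `<InformAI />` prop names below (`name`, `prompt`, `props`) are assumptions based on the description above, not confirmed API:

```tsx
import { InformAI } from 'inform-ai';

type Firewall = { name: string; status: 'online' | 'offline' };

export function FirewallTable({ firewalls }: { firewalls: Firewall[] }) {
  return (
    <>
      {/* Surfaces this component's purpose and data to the LLM */}
      <InformAI
        name="Firewall Table"
        prompt="Shows the company's firewalls and their current status"
        props={{ firewalls }}
      />
      <table>
        <tbody>
          {firewalls.map(fw => (
            <tr key={fw.name}>
              <td>{fw.name}</td>
              <td>{fw.status}</td>
            </tr>
          ))}
        </tbody>
      </table>
    </>
  );
}
```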
Error handling and retry with React Server Components
React Server Components are a game-changer when it comes to building large web applications without sending megabytes of JavaScript to the client. They allow you to render components on the server and stream them to the client, which can significantly improve the performance of your application.
However, React Server Components can throw errors, just like regular React components. In this article, we'll explore how to handle and recover from errors in React Server Components.
Error boundaries
In React, you can use error boundaries to catch errors that occur during rendering, in lifecycle methods, or in constructors of the whole tree below them. An error boundary is a React component that catches JavaScript errors anywhere in its child component tree and logs those errors, displaying a fallback UI instead of crashing the entire application.
To create an error boundary in React, you need to define a component that implements the `componentDidCatch` lifecycle method. This method is called whenever an error occurs in the component tree below the error boundary.
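The classic shape looks like this (the standard React pattern: `getDerivedStateFromError` switches to the fallback UI, while `componentDidCatch` is the place to log):

```tsx
import React from 'react';

class ErrorBoundary extends React.Component<
  { fallback: React.ReactNode; children: React.ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };

  // Render the fallback UI on the next render after an error
  static getDerivedStateFromError() {
    return { hasError: true };
  }

  // Called when an error occurs anywhere in the child tree; log it here
  componentDidCatch(error: Error, info: React.ErrorInfo) {
    console.error(error, info.componentStack);
  }

  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}
```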
Promises across the void: Streaming data with RSC
Last week we looked at how React Server Component Payloads work under the covers. Towards the end of that article I mentioned a fascinating thing that you can do with RSC: sending unresolved promises from the server to the client. When I first read that I thought it was a documentation bug, but it's actually quite real (though with some limitations).
Here's a simple example of sending a promise from the server to the client. First, here's our server-rendered component, called SuspensePage in this case:
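A sketch of the idea (the component and file names are assumptions; the key point is that the promise is created but deliberately not awaited on the server):

```tsx
import { Suspense } from 'react';
import { getData } from './getData';
import DataViewer from './DataViewer'; // a client component that consumes the promise

export default function SuspensePage() {
  // Deliberately NOT awaited: the unresolved promise is serialized into
  // the RSC payload and resolved on the client once the data arrives
  const dataPromise = getData();

  return (
    <Suspense fallback={<p>Loading...</p>}>
      <DataViewer dataPromise={dataPromise} />
    </Suspense>
  );
}
```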
So we just imported a `getData()` function that returns a promise that resolves after 1 second. This simulates a call to a database or other asynchronous action. Here's our fake `getData()` function:
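Something like this, matching the one-second delay described above:

```typescript
// Fake data fetch: resolves after 1 second to simulate a database call
export async function getData(): Promise<string> {
  await new Promise(resolve => setTimeout(resolve, 1000));
  return 'Hello from the server';
}
```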
Loading Fast and Slow: async React Server Components and Suspense
When the web was young, HTML pages were served to clients running web browser software that would turn the HTML text response into rendered pixels on the screen. At first these were static HTML files, but then things like PHP came along to allow the server to customize the HTML sent to each client.
CSS came along to change the appearance of what got rendered. JavaScript came along to make the page interactive. Suddenly the page was no longer the atomic unit of the web experience: pages could modify themselves right there inside the browser, without the server being in the loop at all.
This was good because the network is slow and less than 100% reliable. It heralded a new golden age for the web. Progressively, less and less of the HTML content was sent to clients as pre-rendered HTML, and more and more was sent as JSON data that the client would render into HTML using JavaScript.
This all required a lot more work to be done on the client, though, which meant the client had to download a lot more JavaScript. Before long we were shipping MEGABYTES of JavaScript down to the web browser, and we lost the speediness we had gained by not reloading the whole page all the time. Page transitions were fast, but the initial load was slow. Megabytes of code shipped to the browser can multiply into hundreds of megabytes of device memory consumed, and not every device is your state-of-the-art MacBook Pro.
Single Page Applications ultimately do the same thing as that old PHP application did - render a bunch of HTML and pass it to the browser to render. The actual rendered output is often a few kilobytes of plain text HTML, but we downloaded, parsed and executed megabytes of JavaScript to generate those few kilobytes of HTML. What if there was a way we could keep the interactivity of a SPA, but only send the HTML that needs to be rendered to the client?
Avoiding Catastrophe by Automating OPNSense Backups
tl;dr: a Backups API exists for OPNSense. opnsense-autobackup uses it to make daily backups for you.
A few months ago I set up OPNSense on my home network, to act as a firewall and router. So far it's been great, with a ton of benefits over the eero mesh system I was replacing - static DHCP assignments, pretty local host names via Unbound DNS, greatly increased visibility and monitoring possibilities, and of course manifold security options.
However, it's also become a victim of its own success. It's now so central to the network that if it were to fail, most of the network would go down with it. The firewall rules, VLAN configurations, DNS setup, DHCP and so on are all very useful and deeply embedded - if they go away, most of my network services go down with them: internet access, home automation, NAS, cameras, and more.
OPNSense lets you download a backup via the UI; sometimes I remember to do that before making a sketchy change, but I once wiped out the box without a recent backup and ended up spending several hours getting things back up again. That was before I really embraced things like local DNS and static DHCP assignments, on which a bunch of my automation and configuration now relies.