Introducing InformAI - Easy & Useful AI for React apps

Most web applications can benefit from AI features, but adding AI to an existing application can be a daunting prospect. Even a moderate-sized React application can have hundreds of components spread across dozens of pages. Sure, it's easy to tack a chat bot onto the bottom corner, but it won't be useful unless you integrate it with your app's content.

This is where InformAI comes in. InformAI makes it easy to surface all the information that you already have in your React components to an LLM or other AI agent. With a few lines of React code, your LLM can now see exactly what your user sees, without having to train any models, implement RAG, or any other expensive setup.

[Image: InformAI completes the quadrant. LLMs read and write text, the Vercel AI SDK can also write UI, and InformAI lets LLMs read UI.]

InformAI is not an AI itself; it just lets you expose components and UI events to one via the simple <InformAI /> component. Here's how we might add AI support to a React component that shows a table of a company's firewalls:

<InformAI
  name="Firewalls Table"
  prompt="Shows the user a paginated table of firewalls and their scheduled backup configurations"
  props={{ data, page, perPage }}
/>
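
If you'd rather not add an extra tag to your JSX, the same information can be published from a hook instead: the library also exposes useInformAI(). Here's a minimal sketch, assuming the hook accepts the same name/prompt/props shape as the component:

"use client";

import { useInformAI } from "inform-ai";

type FirewallsTableProps = { data: unknown[]; page: number; perPage: number };

export function FirewallsTable({ data, page, perPage }: FirewallsTableProps) {
  // publish the same name/prompt/props the <InformAI /> tag above would send
  useInformAI({
    name: "Firewalls Table",
    prompt: "Shows the user a paginated table of firewalls and their scheduled backup configurations",
    props: { data, page, perPage },
  });

  return <table>{/* render the rows from data */}</table>;
}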

Under the covers, InformAI keeps track of your component as it renders and re-renders in response to the user. When the user is ready to ask the LLM a question, InformAI automatically wraps all of that context up into an LLM-friendly data format, which is sent along with the user's message to an LLM backend of your choice.
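
As a purely hypothetical illustration (the real wire format is an internal detail of inform-ai), the context accompanying a user's message might read something like:

Component: Firewalls Table
Prompt: Shows the user a paginated table of firewalls and their scheduled backup configurations
Props: { "page": 1, "perPage": 20, "data": [...] }

User: "Which of these firewalls don't have a backup schedule yet?"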

Because all of that component state arrives alongside the user's input, the LLM can respond based on what the user can actually see. And since InformAI also supports React Server Components, the LLM can reply with rendered, specifically-configured components as well as a traditional text response, allowing conversations like this:

[Image: an InformAI chat example. It's pretty easy to have a chatbot render React components to answer user questions.]

Here the user is able to ask the LLM questions, and it answers based partly on the information exposed to it via InformAI. In the example above, the LLM responded to the second message by returning a <BackupsTable /> component, rendered server-side and streamed to the client just as if it were text. There's a full walkthrough of how to do this with the Vercel AI SDK in the InformAI README.
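
The gist of that walkthrough, if you're using the AI SDK's ai/rsc package, is a streamUI call with a tool whose generate function returns JSX. Here's a rough sketch; BackupsTable, the tool name and the model choice are illustrative stand-ins rather than InformAI's actual example code:

import { streamUI } from "ai/rsc";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { BackupsTable } from "@/components/BackupsTable"; // hypothetical app component

export async function submitUserMessage(message: string) {
  const result = await streamUI({
    model: openai("gpt-4o"),
    prompt: message,
    // plain text replies stream to the client as they arrive
    text: ({ content }) => <div>{content}</div>,
    tools: {
      showBackups: {
        description: "Show the user a table of backups for a device",
        parameters: z.object({ deviceId: z.string() }),
        generate: async ({ deviceId }) => {
          // rendered server-side and streamed to the client just like text
          return <BackupsTable deviceId={deviceId} />;
        },
      },
    },
  });

  return result.value;
}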

Live Demo

InformAI is easy to add to any React app, including this blog, which is a Next.js app that happens to use a lot of React Server Components. Because InformAI works just as well with RSC as with traditional client-side React components, I was able to add LLM-awareness to every post on my site just by adding this one <InformAI /> tag:

import fs from 'fs';
import matter from 'gray-matter';
import { InformAI, InformAIProvider } from 'inform-ai';

// a React Server Component that renders a single blog post
// (Post, pathForPostFile and MarkdownContent are this blog's own types/helpers)
export async function PostContent({ post }: { post: Post }) {
  const postFilePath = pathForPostFile(post);
  const source = fs.readFileSync(postFilePath);
  const { content } = matter(source);

  // in reality, the <InformAIProvider> is in my layout.tsx, but I show it here for clarity
  return (
    <InformAIProvider>
      <InformAI
        name="Blog Post Content"
        props={{ content, post }}
        prompt="Shows the blog content to the user. Also gives you the full post metadata"
      />
      <MarkdownContent content={content} />
    </InformAIProvider>
  );
}

You can pass whatever props you like to InformAI - in this case I'm passing the whole post object as well as the post's markdown content. InformAI also ships with a couple of UI components that make development easier, such as the <CurrentState /> component, which you can drop anywhere in your app to see what InformAI sees:

[Live component: the <CurrentState /> panel, rendering the InformAI messages collected on this page]

That's a live component - expand a row to see the name, props (post and content) and prompt that we supplied in our template.
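
Adding that panel to your own app is a one-liner; a minimal sketch, assuming it's rendered somewhere inside your <InformAIProvider>:

import { CurrentState } from "inform-ai";

// render anywhere inside your InformAIProvider (e.g. in layout.tsx)
// to inspect the messages InformAI has collected so far
export function DebugState() {
  return <CurrentState />;
}

Now that we've told InformAI about our component, we can integrate a chat bot like this one and immediately talk to the LLM, knowing that it sees what we see: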

Try asking it a question about something in this article (e.g. "what is InformAI?") and you'll get a response from the LLM that incorporates the context your InformAI-integrated components have published.

Although InformAI's focus is on collecting intelligence from your components and exposing that to an LLM, it does also ship with a couple of basic UI components to help you get started quickly. These UI components are totally optional and you'll probably want to roll your own at some point, but adding the ChatBot above to your app can be as simple as this:

ChatBot.tsx
"use client";

import { ChatWrapper } from "inform-ai";
import { useActions, useUIState } from "ai/rsc";

export function ChatBot({ className }: { className?: string }) {
  const { submitUserMessage } = useActions();
  const [messages, setMessages] = useUIState();

  return (
    <ChatWrapper
      className={className}
      submitUserMessage={submitUserMessage}
      messages={messages}
      setMessages={setMessages}
    />
  );
}

See the InformAI README for a little more on how to do that, including how to set up the submitUserMessage function.
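
The short version: submitUserMessage is a server action (like the streamUI sketch earlier) that you expose to the client via ai/rsc's createAI. A condensed sketch, with the file layout being illustrative:

// app/ai.tsx (illustrative path)
import { createAI } from "ai/rsc";

import { submitUserMessage } from "./actions"; // the server action sketched earlier

export const AIProvider = createAI({
  initialAIState: [],
  initialUIState: [],
  actions: { submitUserMessage },
});

Wrap your app in <AIProvider> (e.g. in layout.tsx) and the useActions() and useUIState() hooks inside ChatBot will pick it up.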

What it's Good For

InformAI lets you rapidly and iteratively add deep AI integration to new or existing React applications. It's generally just a couple of lines per component, and it can provide value to your users even if you don't integrate your entire app in one go. Gradual adoption is easy, and it works across server-side and client-side components.

With a few lines of code the LLM can see everything your user can, including a timeline of events as the user navigates around the app. Imagine a cyber security app that tracks viruses blocked by your firewall: there is usually a lot of noise in that data, and investigating a pattern you spot can take several steps. With InformAI, your components can emit events as the user performs their investigation, along with information about any streamed UI components the LLM sent in response to questions. With all of this context, the AI can understand the journey the user is on, and preemptively render custom UI or fetch data to help complete the task.
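
As a hypothetical sketch of what that could look like (the exact events API is described in the README and may differ from this), a component could record noteworthy user actions alongside its state:

"use client";

import { useInformAI } from "inform-ai";

export function VirusLog({ entries }: { entries: { id: string; name: string }[] }) {
  const informAI = useInformAI({
    name: "Virus Log",
    prompt: "Shows the user viruses recently blocked by the firewall",
    props: { count: entries.length },
  });

  // hypothetical event API: note that the user drilled into a specific entry
  const onSelect = (id: string) =>
    informAI.addEvent({ type: "row-selected", description: `User opened virus log entry ${id}` });

  return (
    <ul>
      {entries.map((entry) => (
        <li key={entry.id} onClick={() => onSelect(entry.id)}>
          {entry.name}
        </li>
      ))}
    </ul>
  );
}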

InformAI is not a silver bullet: any content that isn't already on the screen (or at least accessible to your React components) won't be surfaced to the AI this way. Your app's AI will still benefit from RAG, model fine-tuning and tool provision, and InformAI is not a substitute for any of those. But for React developers who aren't already deep in the weeds with LLM fine-tuning, it provides great bang for the buck in getting started.

Play with it online

I have a little open source Next.js app called LANsaver - a simple app that backs up network devices like firewalls, managed switches and Home Assistant instances, in case something dies and you need to restore it. I integrated InformAI into it, and you can see an online demo version at https://lansaver.edspencer.net. I left the <CurrentState /> component in place there so you can see what is being exposed to the LLM via InformAI.

LANsaver (see the source on GitHub) is pretty basic - all it really knows about are Devices, Backups and Schedules - but hopefully it shows how easy it is to integrate InformAI into your own applications. LLMs thrive on data, so apps with more meat on the bones than LANsaver will benefit even more from InformAI.

Next Steps

InformAI is fairly new but has a decent amount of documentation and automated testing. Its API is relatively small and stable, and it is published as an npm module called inform-ai. Near-term improvements include an easy way to tell the LLM that the user has navigated to a new page (so previously-registered components may no longer be visible), a proper docs site and more examples.

If you're interested in this stuff and haven't had the chance to check out the Vercel AI SDK yet, I highly recommend you do. Beyond being useful and easy to use, some of the source code is just beautiful (I spent a couple of days recently just admiring the streaming code). It's well worth following Lars Grammel to get news first-hand, as he seems to ship updates pretty much daily.
