NarratorAI: Trainable AI assistant for Node and React

Every word in every article on this site was, for better or worse, written by me: a real human being. Recently, though, I realized that various pages on the site kinda sucked. Chiefly I'm talking about the Blog home page, tag pages like this one for articles tagged with AI, and other places where I could do with some "meta-content".

By meta-content I mean content about content, like the couple of short paragraphs that summarize recent posts for a tag, or the outro text that now appears at the end of each post, along with the automatically generated Read Next recommendations that I added recently using ReadNext.

If you go look at the RSC tag, for example, you'll see a couple of paragraphs that summarize what I've written about regarding React Server Components recently. The list of article excerpts underneath it is a lot more approachable with that high-level summary at the top. Without the intro, the page just feels neglected and incomplete.

But the chances of me remembering to update that intro text every time I write a new post about React Server Components are slim to none. I'll write it once, it'll get out of date, and then it will be about as useful as a chocolate teapot. We need a better way. Ideally one that also lets me play by watching the AI stream automatically generated content before my very eyes:

Narrator AI training in action
This is strangely addictive

AI to the rescue

Although I don't want AI generating my actual content, I'm happy to let it generate the meta-content that surrounds it. That's what NarratorAI does: it's a pair of NPM packages that generate and present AI-written meta-content to support your real content. It's a bit like a content assistant that writes the boring bits for you:

  • narrator-ai: The core package that generates the content
  • @narrator-ai/react: A React library that helps render, regenerate and rate the content

You can use narrator-ai whether you use React or not, but the two go well together. You don't need to use @narrator-ai/react if you don't want to, but it does some nice things for you, like letting you easily regenerate and rate the content that narrator-ai generates (see the gif above and the live demo below).

Live Demo me already

This little thing inside the box below is a demo of NarratorAI in action. It's showing the "What to Read Next" text for this very post (scroll down to the bottom of this article to see it in place). This piece of content was generated by narrator-ai, and I'm using @narrator-ai/react below to render it.

Although I use @narrator-ai/react to render this type of content throughout the site, I only enable the editorial action buttons when I'm developing locally. For the demo below, though, I've enabled the regenerate, thumbs up and thumbs down buttons for you to play with live, and I've chosen a lovely shade of red to make things stand out:

Try clicking these buttons ->

If you enjoyed learning about Narrator AI, you might also appreciate exploring ReadNext: AI Content Recommendations for Node JS for insights into creating AI-driven "What to Read Next" components. Additionally, check out AI Content Recommendations with TypeScript and Introducing InformAI - Easy & Useful AI for React apps to discover more about integrating AI features into your projects.

The regenerate button will stream in a new piece of Markdown content generated by NarratorAI. The thumbs up and thumbs down buttons will rate the content, which will help NarratorAI learn what you like and don't like. You can provide a reason why you did/didn't like the content, and that feedback will be used to improve subsequent generations.

Don't worry, you can't break anything - although it's all hooked up to a real backend, the demo above won't overwrite anything.

How it works

NarratorAI uses a technique called Few-Shot Prompting to generate content. This is where we give an LLM a few examples of the type of content we want it to generate, and then ask it to generate more of the same. Just as importantly, we can also give it examples of what we don't want it to generate, and ask it to avoid that.

Few-Shot Prompting has a few benefits: it's quick to train, easy to understand, and portable between different models. At its core it's just a slightly longer prompt - you could fine-tune a model to do the same thing, and that's totally reasonable in many cases, but Few-Shotting is way easier (and more portable).
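
To make this concrete, here's a minimal sketch of what such a prompt can look like - the wording and the postSummaries placeholder are my own invention, not narrator-ai's actual prompt:

//postSummaries stands in for the article summaries you'd interpolate
declare const postSummaries: string;

const fewShotPrompt = `
Write a short intro for my blog's "ai" tag page.

GOOD EXAMPLE (more like this, please):
Lately I've been digging into how LLMs fit into everyday Node and React apps...

BAD EXAMPLE (avoid this style):
In today's fast-paced digital landscape, artificial intelligence is revolutionizing...

Here are the posts to summarize:
${postSummaries}
`;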

NarratorAI builds on top of the excellent Vercel AI SDK, which means you can configure it to use pretty much any LLM you like. By default it uses GPT-4o, so the only configuration you need is to set the OPENAI_API_KEY environment variable (the Vercel AI SDK supports a range of AI providers beyond OpenAI).
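
For reference, the underlying SDK call looks roughly like this - a sketch of the Vercel AI SDK itself, not of narrator-ai's internals:

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

//generateText is the Vercel AI SDK's basic text-generation call;
//provider packages like @ai-sdk/anthropic slot in the same way
const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Write a two-paragraph intro for my blog's AI tag page.",
});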

Generating content

Right now I generate two types of content with NarratorAI:

  • Intros for tag pages and the blog home page
  • Outros for the end of each post, telling you about related articles

In order to generate the intros, I need to grab the most recent X articles for tag XYZ and pass them to Narrator along with a prompt telling it what I want. For the outros, I do a similar thing, except I need to find the X articles most closely related to the one I'm generating for. That turns out to be pretty easy, as I'm already using ReadNext to automatically generate the related articles list.

Because there's a little logic involved in assembling the pieces that I need to send to the LLM for each task, and because I want to be able to generate a given piece of content from either the UI or a CLI, I created a TaskFactory class that does all the heavy lifting for me. Here's a simplified version of how I use it:

TaskFactory.ts
//create our reusable Narrator instance
export const narrator = new Narrator({
  outputFilename: (docId) => `${docId}.md`,
  outputDir: path.join(process.cwd(), "editorial"),
  examplesDir: path.join(process.cwd(), "editorial", "examples"),
});

export class TaskFactory {
  //returns a GenerationTask for a given docId
  jobForId(docId: string): GenerationTask {
    const [exampleKey, slug] = docId.split("/");
    const { publishedPosts } = this.posts;

    if (exampleKey === "post") {
      return this.postJob(publishedPosts.find((post) => post.slug === slug));
    } else if (exampleKey === "tag") {
      return this.tagJob({ tag: slug });
    }
  }

  //returns a GenerationTask for a post outro
  postJob(post): GenerationTask {
    //summaries of related articles
    const relatedArticles = post.related
      ?.map((slug) => this.posts.publishedPosts.find((p) => p.slug === slug))
      .map((post) => ({ post, summary: this.readNext.getSummaryById(post.slug) }));

    return {
      docId: `post/${post.slug}`,
      prompt: postReadNextPrompt(post, this.posts.getContent(post), relatedArticles),
      suffix: "Please reply with a 2 sentence suggestion for what the reader should read next.",
    };
  }

  //returns a GenerationTask for a tag intro
  tagJob({ tag }): GenerationTask {
    //the 10 most recent posts for a given tag
    const recentPosts = this.posts.publishedPosts
      .filter((post) => post.tags.includes(tag))
      .slice(0, 10)
      .map((post) => ({ post, summary: this.readNext.getSummaryById(post.slug) }));

    return {
      docId: `tag/${tag}`,
      prompt: tagIntroPrompt(tag, recentPosts),
    };
  }
}

All that does is give me a function called jobForId that I can pass a docId to; it returns a GenerationTask object that I can pass to the narrator.generate function. The GenerationTask contains the prompt I want to send to the LLM, a unique docId that identifies the content to generate, and an optional suffix.
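
Inferred from the fields used above, the shape of a GenerationTask is roughly this (a sketch; the canonical type ships with narrator-ai):

interface GenerationTask {
  //unique identifier for the generated content; also drives the output filename
  docId: string;
  //the full prompt to send to the LLM
  prompt: string;
  //optional closing instruction appended to the prompt
  suffix?: string;
}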

Now I can just run a single line of code to generate the intro/outro for any given tag or post:

await narrator.generate(factory.jobForId("tag/ai"));

The one thing I haven't shown you here is the tagIntroPrompt function that TaskFactory refers to. That's just a function that takes a tag and a list of recent posts and returns a prompt that tells the LLM what I want it to generate. Here's a slightly simplified version of that function (the postReadNextPrompt function is similar):

TaskFactory.ts
//it's just a string. A long string, but a string.
function tagIntroPrompt(tag: string, recentPosts: RecentPost[] = []) {
  return `
These are summaries of the ${recentPosts.length} most recent posts on my blog for the tag "${tag}".
The summaries have been specifically prepared for you so that you have the context you need to write
a very brief 2 paragraph overview of what I've been writing about recently regarding this tag.
Write the editorial in my own tone of voice, as if I were writing it myself.
It should be around 100 words.

*** There's actually more stuff here, but you get the idea ***

Keep it humble and not too highfalutin. I'm a technical blogger, not a poet.

Here are the summaries of the recent blog posts:

${recentPosts.map(({ post, summary }) => articleRenderer(post, summary)).join("\n\n")}
`;
}

//LLM-friendly string for a given post summary
const articleRenderer = (post, summary) => `
ARTICLE METADATA:
Article Title: ${post.title}
Article relative url: ${post.relativeLink}
Tags: ${post.tags.join(", ")}
Published: ${timeAgo.format(new Date(post.date))}
ARTICLE SUMMARY: ${summary}
`;

It's just returning a string, which is then passed in as the prompt to the generate function. With those pieces in place, generating all ~200 pieces of intro and outro content for the whole site is done with this simple script:

script/generate-narration.ts
//to expose the OPENAI_API_KEY
import * as dotenv from "dotenv";
dotenv.config();

import Posts from "@/lib/blog/Posts";
import { TaskFactory, narrator } from "@/lib/blog/TaskFactory";

async function main() {
  const taskFactory = new TaskFactory();
  const posts = new Posts();

  //generate post "read next" outros
  for (const post of posts.publishedPosts) {
    await narrator.generate(taskFactory.jobForId(`post/${post.slug}`)!, { save: true });
  }

  //generate the intro per tag (but only for tags with 3 or more posts)
  const tags = posts.getTagsWithCounts().filter(({ count }) => count >= 3);

  for (const tag of tags) {
    await narrator.generate(taskFactory.jobForId(`tag/${tag.tag}`)!, { save: true });
  }

  //generate the overall /blog intro
  await narrator.generate(taskFactory.jobForId("recent-posts")!);
}

main()
  .catch(console.error)
  .then(() => process.exit(0));

There's a bunch more documentation for this over on the GitHub page for NarratorAI.

Training the Narrator for better outcomes

You can write the best prompt in the world, but that doesn't mean the model is going to understand it the same way you do. The best way to improve the quality of the content that Narrator generates is to train it by giving it examples of good and bad generations. You can do this in two ways:

Training with the CLI

It's pretty easy to set up a simple script that will train the Narrator for you. Here's a slightly simplified version of the script I use to train the Narrator for this site:

script/train-narrator.ts
//expose the OPENAI_API_KEY
import * as dotenv from "dotenv";
dotenv.config();

import { TaskFactory, narrator } from "@/lib/blog/TaskFactory";
import Posts from "@/lib/blog/Posts";

async function main() {
  const taskFactory = new TaskFactory();
  const posts = new Posts();

  //iterates over each published post, generating content and asking for my judgment
  for (const post of posts.publishedPosts) {
    await narrator.train(taskFactory.jobForId("post/" + post.slug));
  }
}

main()
  .catch(console.error)
  .then(() => process.exit(0));

This script will iterate over each published post on the site, passing each "What to Read Next" task to Narrator's train function, which will ask me to rate what it generated. I can skip to the next one, but if I give a good/bad rating, that feedback will be used to improve the next generation.

Narrator AI training in action
5 minutes spent training the Narrator will greatly improve the quality of the content it generates

Under the covers, Narrator saves the content, the good/bad verdict, and the optional reason you give in a YAML file. Each time the generate function is called, Narrator selects some of the good and bad examples you've given it and passes them to the LLM as part of the prompt. That's the Few-Shotting we talked about earlier.
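
As a rough illustration, a saved example might look something like this (the path and field names here are hypothetical - see the narrator-ai docs for the real schema):

# editorial/examples/post/my-post.yml (hypothetical path and schema)
verdict: good
reason: Concise, sounds like me, and links the right articles
content: |
  If you enjoyed learning about Narrator AI, you might also appreciate
  exploring ReadNext for AI-driven "What to Read Next" components.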

Training with the React component

You've already seen this bit. The live demo above has more than just a regenerate button - it also has thumbs up and down buttons to train Narrator based on your feedback.

These thumbs up/down buttons are rendered by a component in the @narrator-ai/react library, and are connected to a couple of simple React Server Functions on the backend, which hand most of the work off to NarratorAI. That's configured via a React Context Provider that Narrator - ahem - provides you with:

providers/Narrator.tsx
import { createNarrator } from "@narrator-ai/react";
import { regenerateNarration, saveExample } from "../actions/narration";

const Narrator = createNarrator({
  actions: {
    saveExample,
    regenerateNarration,
  },
});

export default Narrator;

The only thing we configure that provider with is an actions object, which accepts saveExample and regenerateNarration functions. Providing these at the top level of the app means that we can place any number of Narration UI elements throughout the app and they'll all transparently support rating and regeneration.

As far as those Server Functions inside actions/narration.ts go, they're just a couple of simple functions that call the NarratorAI backend:

actions/narration.ts
"use server";
import { TaskFactory, narrator } from "@/lib/blog/TaskFactory";
import { createStreamableUI } from "ai/rsc";
import { MDXRemote } from "next-mdx-remote/rsc";
import { Spinner } from "@narrator-ai/react";

//called whenever you click a thumbs up or down
export async function saveExample(example) {
  return await narrator.saveExample(example);
}

//this is all we have to do to support streaming MDX content,
//but this function could totally just return a string instead if streaming isn't your thing
export async function regenerateNarration(docId: string) {
  const factory = new TaskFactory();
  const ui = createStreamableUI(<Spinner />);

  (async () => {
    const textStream = await narrator.generate(factory.jobForId(docId), { stream: true, save: true });
    let currentContent = "";

    for await (const delta of textStream) {
      currentContent += delta;
      ui.update(<MDXRemote source={currentContent} />);
    }

    ui.done(<MDXRemote source={currentContent} />);
  })();

  //Narrator knows how to handle Vercel AI text & UI streams as well as vanilla JS strings
  return ui.value;
}

So long as you import the Narrator provider you exported from providers/Narrator.tsx somewhere high up in your app's React component tree, you'll be all set. Something like this (though you'll probably have some other stuff in your actual layout):

layout.tsx
import NarratorProvider from "./providers/Narrator";

export default function layout({ children }) {
  return <NarratorProvider>{children}</NarratorProvider>;
}

Now the final thing to do is to actually render our Narration content in our app. Because I do this in a few different places, I made a simple wrapper component that I can reuse:

NarrationWrapper.tsx
import { Narration } from "@narrator-ai/react";
import NarrationMarkdown from "./NarrationMarkdown";

const sparkleText = "This summary was generated by AI using narrator-ai.<br /> Click to learn more.";

export function NarrationWrapper({ id, title }: { id: string; title: string }) {
  return (
    <Narration
      title={title}
      id={id}
      sparkleLink="/about/ai"
      sparkleText={sparkleText}
      //this is what lets me regenerate and rate the content in dev mode only
      showActions={process.env.NODE_ENV === "development"}
    >
      <NarrationMarkdown id={id} />
    </Narration>
  );
}

Most of the heavy lifting is done by the <Narration> component, which is what gives you the regenerate, thumbs up and thumbs down buttons. Note that it doesn't render the actual content for you - it can't know how you want to render your content so you need to do that yourself. In my case I just have a simple NarrationMarkdown component that uses next-mdx-remote to render the content:

NarrationMarkdown.tsx
"use server";

import { narrator } from "@/lib/blog/TaskFactory";
import { MDXRemote } from "next-mdx-remote/rsc";

async function NarrationMarkdown({ id }) {
  const content = narrator.getNarration(id);

  if (!content) {
    return null;
  } else {
    return <MDXRemote source={content} />;
  }
}

export default NarrationMarkdown;

And that's it. Now you can throw in as many <NarrationWrapper title="This is cool!" id="tag/ai" /> components as you like throughout your app, and they'll all support regeneration and rating of the content. Here's precisely how that React snippet turns out, this time showing the intro content for the AI tag:

This is cool!

In recent weeks, I've been focusing on integrating AI into JavaScript environments, especially with React and Node.js. My latest project, NarratorAI, is a trainable AI assistant that leverages the power of modern web frameworks like React and Next.js to enhance user interfaces. It builds on previous works, like ReadNext, an open-source tool for AI-driven content recommendations. Both projects showcase my ongoing commitment to improving content delivery systems through AI.

In addition to these innovations, I've also explored how AI can be harnessed for creating personalized recommendations. Articles like AI Content Recommendations with TypeScript and Easy RAG for TypeScript and React Apps delve into using Retrieval-Augmented Generation (RAG) to enhance user experience. Interested developers can get hands-on with tools such as InformAI, which simplifies AI integration into React apps. These posts collectively illustrate my efforts to make complex AI integrations more accessible and practical in the real world.

Go on, click the regenerate button a few times. You earned it.

Use it in your own project

Anyway, that's it. It's fun and easy to use. There are more docs and examples over on the NarratorAI GitHub page, and you can install it from NPM like this:

npm install narrator-ai @narrator-ai/react

Godspeed and happy generating!


What to Read Next

If you enjoyed learning about Narrator AI, you might also appreciate exploring ReadNext: AI Content Recommendations for Node JS for insights into creating AI-driven "What to Read Next" components. Additionally, check out AI Content Recommendations with TypeScript and Introducing InformAI - Easy & Useful AI for React apps to discover more about integrating AI features into your projects.