12 posts tagged ai

I've been exploring some exciting projects at the intersection of AI and web development. Recently, I've been delving into new tools like mdx-prompt, which allows for composable LLM prompts in JSX, highlighted in mdx-prompt: Composable LLM Prompts with JSX. Alongside this, I've been working on bragdoc.ai, a project built in just three weeks, as detailed in How I built bragdoc.ai in 3 weeks.

Additionally, I've been applying AI to enhance user interactions through projects like NarratorAI and ReadNext. If you're interested in AI-driven content recommendations, check out ReadNext: AI Content Recommendations for Node JS. These posts reflect my ongoing commitment to integrating AI seamlessly into web applications, making it a core part of improving developer efficiency and user experience.

Deep Research Yourself

After 2 years of doing my own thing, I recently got the itch to work on something bigger than myself again and earn some money in the process. After talking to a few interesting companies, I was reminded that hiring engineers is really hard, really time-consuming, and carries a large degree of risk.

When I think about which company makes the most sense for me to join, I picture myself as a jigsaw piece, with a unique blend of skills, experience and personality traits that you could conceivably draw as a pretty complex shape. Each company is also a jigsaw, with a bunch of pieces missing. Just as your shape is unlike anyone else's, so each company's gaps are uniquely shaped as well.

As I plan to do full stack engineering for a company that has a strong AI focus, the jigsaw for a company that might be an optimal fit for me could look like this. Each blue piece is a position the company has already filled, with the blank ones being empty positions they are hiring for:

Jigsaw Pieces fitting together
Your unique skills and experience shape your puzzle piece. The better your shape fits with a company's gap, the better fit you are

Imagining myself as the green piece and other candidates for the role as the orange and red, this is a company jigsaw where I would have high alignment, because the shape of my puzzle piece fits with the gap in the company jigsaw without missing areas or overlapping too much.

This is a good company to consider joining, with both company and candidate benefitting from the strong alignment. Our orange and red candidates don't fit so well, or overlap too much, so their ability to create value for the company (and therefore themselves) is lower.

When people research you, what do they see?

Thinking from the hiring company's point of view, it's quite a lot of effort to do the research on a candidate. I honestly don't know if the automated candidate screening tooling is good enough to trust yet, but there are 2 things I do know:

  1. Almost all the information they will gather about you will be from the internet
  2. You don't get to see a copy of what they find out about you
Continue reading

Eval-Driven Design with NextJS and mdx-prompt

In the previous article, we went on a deep dive into how I use mdx-prompt on bragdoc.ai to write clean, composable LLM prompts using good old JSX. In that article as well as the mdx-prompt announcement article, I promised to talk about Evals and their role in helping you architect and build AI apps that you can actually prove work.

Evals are to LLMs what unit tests are to deterministic code. They are an automated measure of the degree to which your code functions correctly. Unit tests are generally pretty easy to reason about, but LLMs are usually deployed to do non-deterministic and somewhat fuzzy things. How do we test functionality like that?

In the last article we looked at the extract-achievements.ts file from bragdoc.ai, which is responsible for extracting structured work achievement data using well-crafted LLM prompts. Here's a reminder of what that Achievement extraction process looks like, with its functions to fetch, render and execute the LLM prompts.

Orchestration Diagram for Extracting Achievements from Text
The 3 higher-level functions are just orchestrations of the lower-level functions

When it comes right down to it, when we say we want to test this LLM integration, what we're trying to test is render() plus execute(), or our convenience function renderExecute. This allows us to craft our own ExtractAchievementsPromptProps and validate that we get reasonable-looking ExtractedAchievement objects back.

ExtractAchievementsPromptProps is just a TS interface that describes all the data we need to render the LLM prompt to extract achievements from a chat session. It looks like this:

types.ts
// Props required to render the Extract Achievements Prompt
export interface ExtractAchievementsPromptProps {
  companies: Company[];
  projects: Project[];
  message: string;
  chatHistory: Message[];
  user: User;
}
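
To make that concrete, here's a rough sketch of what such an eval could look like. The test runner (vitest), import paths and return shape are assumptions for illustration, not bragdoc.ai's actual eval setup:

// Sketch only: paths, runner and helper shapes are assumptions
import { expect, test } from 'vitest';
import { renderExecute } from './extract-achievements'; // assumed export
import type { ExtractAchievementsPromptProps } from './types';

test('extracts a checkout achievement from a simple chat message', async () => {
  const props: ExtractAchievementsPromptProps = {
    companies: [],
    projects: [],
    message: 'Yesterday I shipped the new checkout flow and cut page load time by 40%',
    chatHistory: [],
    // minimal stand-in for the real User type
    user: { id: 'user-1', name: 'Ed' } as unknown as ExtractAchievementsPromptProps['user'],
  };

  const achievements = await renderExecute(props);

  // Eval assertions are looser than unit test assertions: we check shape and
  // plausibility rather than exact strings, because the LLM output varies
  expect(achievements.length).toBeGreaterThan(0);
  expect(JSON.stringify(achievements).toLowerCase()).toContain('checkout');
});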
Continue reading

mdx-prompt: Real World Example Deep Dive

I just released mdx-prompt, which is a simple library that lets you write familiar React JSX to render high quality prompts for LLMs. Read the introductory article for more general info if you didn't already, but the gist is that we can write LLM Prompts with JSX/MDX like this:

extract-commit-achievements.mdx
<Prompt>
  <Purpose>
    You are a careful and attentive assistant who extracts work achievements
    from source control commit messages. Extract all of the achievements in
    the commit messages contained within the <user-input> tag. Follow
    all of the instructions provided below.
  </Purpose>
  <Instructions>
    <Instruction>Each Achievement should be complete and self-contained.</Instruction>
    <Instruction>If multiple related commits form a single logical achievement, combine them.</Instruction>
    <Instruction>
      Pay special attention to:
      1. Code changes and technical improvements
      2. Bug fixes and performance optimizations
      3. Feature implementations and releases
      4. Architecture changes and refactoring
      5. Documentation and testing improvements
    </Instruction>
  </Instructions>
  <Variables>
    <Companies companies={data.companies} />
    <Projects projects={data.projects} />
    <today>{new Date().toLocaleDateString()}</today>
    <user-instructions>
      {data.user?.preferences?.documentInstructions}
    </user-instructions>
    <UserInput>
      {data.commits?.map((c) => <Commit key={c.hash} commit={c} />)}
    </UserInput>
    <Repo repository={data.repository} />
  </Variables>
  <Examples
    examples={data.expectedAchievements?.map((e) => JSON.stringify(e, null, 4))}
  />
</Prompt>

This ought to look familiar to anyone who's ever seen React code. This project was born of a combination of admiration for the way IndyDevDan and others structure their LLM prompts, and frustration with the string interpolation approaches that everyone takes to generating prompts for LLMs.

In the introductory post I go into some details on why string interpolation-heavy functions are not great for prompts. It's a totally natural thing to want to do - once you've started programming against LLM interfaces, you want to start formalizing the mechanism by which you generate the string that is the prompt. Before long you notice that many of your app's prompts have a lot of overlap, and you start to think about how you can reuse the parts that are the same.

Venn Diagram of Prompt similarities
The Venn Diagram of these 3 prompts used in bragdoc.ai shows a large degree of overlap
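
For contrast, here's an illustrative sketch (not bragdoc.ai's actual code) of the string-interpolation style that becomes hard to reuse once prompts start sharing sections:

// Illustrative only: a template-literal prompt function. Every prompt that
// needs the companies or projects section ends up copy-pasting these lines.
interface PromptData {
  companies: string[];
  projects: string[];
  userInput: string;
}

export function extractAchievementsPrompt(data: PromptData): string {
  return `You are a careful and attentive assistant who extracts work achievements.

Companies: ${data.companies.join(', ')}
Projects: ${data.projects.join(', ')}

Instructions:
- Each achievement should be complete and self-contained.
- Combine related commits into a single logical achievement.

<user-input>
${data.userInput}
</user-input>`;
}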

Lots of AI-related libraries try to help you here with templating solutions, but they often feel clunky. I really, really wanted to like LangChain, but I lost a day of my life trying to get it to render a prompt that I could have done in 5 minutes with JSX. JSX seems to be a pretty good fit for this problem, and anyone who knows React (a lot of people) can pick it up straight away. mdx-prompt helps React developers compose their LLM prompts with the familiar syntax of JSX.

Continue reading

mdx-prompt: Composable LLM Prompts with JSX

I'm a big fan of IndyDevDan's YouTube channel. He has greatly expanded my thinking when it comes to LLMs. One of the interesting things he does is write many of his prompts with an XML structure, like this:

<purpose>
  You are a world-class expert at creating mermaid charts.
  You follow the instructions perfectly to generate mermaid charts.
  The user's chart request can be found in the user-input section.
</purpose>

<instructions>
  <instruction>Generate a valid mermaid chart based on the user-prompt.</instruction>
  <instruction>Use the diagram type specified in the user-prompt.</instruction>
  <instruction>Use the examples to understand the structure of the output.</instruction>
</instructions>

<user-input>
  State diagram for a traffic light. Still, Moving, Crash.
</user-input>

<examples>
  <example>
    <user-chart-request>
      Build a pie chart that shows the distribution of Apples: 40, Bananas: 35, Oranges: 25.
    </user-chart-request>
    <chart-response>
      pie title Distribution of Fruits
        "Apples" : 40
        "Bananas" : 35
        "Oranges" : 25
    </chart-response>
  </example>
  //... more examples
</examples>

I really like this structure. Prompt Engineering has been a dark art for a long time. We're suddenly programming using English, which is hilariously imprecise as a programming language, and it feels not quite like "real engineering".

But prompting is actually not programming in English, it's programming in tokens. It just looks like English, so it's easy to fall into the trap of giving it English. But we're not actually constrained to that at all - we can absolutely format our prompts more like XML and reap some considerable rewards:

  • It's easier for humans to reason about prompts in this format
  • It's easier to reuse content across prompts
  • It's easier to have an LLM generate a prompt in this format (see IndyDevDan's metaprompt video)

We've seen this before

I've started migrating many of my prompts to this format, and noticed a few things:

  • It organized my thinking around what data the prompt needs
  • Many prompts could or should use the same data, but repeat fetching/rendering logic each time
Continue reading

How I built bragdoc.ai in 3 weeks

As we start 2025, it's never been faster to get a SaaS product off the ground. The frameworks, vendors and tools available make it possible to build in weeks what would have taken months or years even just a couple of years ago.

But it's still a lot.

Even when we start from a base template, we still need to figure out our data model, auth, deployment strategy, testing, email sending/receiving, internationalization, mobile support, GDPR, analytics, LLM evals, validation, UX, and a bunch more things:

How I built Bragdoc.ai in 3 weeks
Version 1 of anything is still a lot

This morning I launched bragdoc.ai, an AI tool that tracks the work you do and writes things like weekly updates & performance review documents for you. In previous jobs I would keep an achievements.txt file that theoretically kept track of what I worked on each week so that I could make a good case for myself come review time. Bragdoc scratches my own itch by keeping track of that properly with a chatbot who can also make nice reports for me to share with my manager.

But this article isn't much about bragdoc.ai itself, it's about how a product like it can be built in 3 weeks by a single engineer. The answer is AI tooling, and in particular the Windsurf IDE from Codeium.

In fact, this article could easily have been titled "Use Windsurf or Die". I've been in the fullstack software engineering racket for 20 years, and I've never seen a step-change in productivity like the one heralded by Cursor, Windsurf, Repo Prompt and the like. We're in the first innings of a wave of change in how software is built.

Continue reading

NarratorAI: Trainable AI assistant for Node and React

Every word in every article on this site was, for better or worse, written by me: a real human being. Recently, though, I realized that various pages on the site kinda sucked. Chiefly I'm talking about the Blog home page, tag pages like this one for articles tagged with AI and other places where I could do with some "meta-content".

By meta-content I mean content about content, like the couple of short paragraphs that summarize recent posts for a tag, or the outro text that now appears at the end of each post, along with the automatically generated Read Next recommendations that I added recently using ReadNext.

If you go look at the RSC tag, for example, you'll see a couple of paragraphs that summarize what I've written about React Server Components recently. The list of article excerpts underneath it is a lot more approachable with that high-level summary at the top. Without the intro, the page just feels neglected and incomplete.

But the chances of me remembering to update that intro text every time I write a new post about React Server Components are slim to none. I'll write it once, it'll get out of date, and then it will be about as useful as a chocolate teapot. We need a better way. Ideally one that also lets me play by watching the AI stream automatically generated content before my very eyes:

Narrator AI training in action
This is strangely addictive
Continue reading

ReadNext: AI Content Recommendations for Node JS

Recently I posted about AI Content Recommendations with TypeScript, which concluded by introducing a new NPM package I've been working on called ReadNext. This post is dedicated to ReadNext, and will go into more detail about how to use ReadNext in Node JS, React, and other JavaScript projects.

What it is

ReadNext is a Node JS package that uses AI to generate content recommendations. It's designed to be easy to use, and can be integrated into any Node JS project with just a few lines of code. It is built on top of LangChain, and delegates to an LLM of your choice for summarizing your content to generate recommendations. It runs locally, does not require you to deploy anything, and has broad support for a variety of content types and LLM providers.

ReadNext is not an AI itself, nor does it want your money, your data or your soul. It's just a library that makes it easy to find related content for developers who use JavaScript as their daily driver. It's best used at build time, and can be integrated into your CI/CD pipeline to generate recommendations for your content as part of your build process.

How to use it

Get started in the normal way:

npm install read-next

Configure a ReadNext instance:

import { ReadNext } from 'read-next'

const readNext = await ReadNext.create({
  // optional, defaults to a temp directory
  cacheDir: '/path/to/cache'
})

Index your content:

await readNext.index({
  sourceDocuments: [
    {
      pageContent: 'This is an article about React Server Components',
      id: 'rsc'
    },
    {
      pageContent: 'This is an article about React Hooks',
      id: 'hooks'
    },
    //... as many as you like
  ]
})
Continue reading

AI Content Recommendations with TypeScript

In the last post, we used TypeScript to create searchable embeddings for a corpus of text content and integrated it into a chat bot. But chat bots are the tomato ketchup of AI - great as an accompaniment to something else, but not satisfying by themselves. Given that we now have the tools to vectorize our documents and perform semantic searches against them, let's extend that to generate content recommendations for our readers.

At the bottom of each of my blog articles are links to other posts that may be interesting to the reader based on the current article. The lo-fi way this was achieved was to find all the other posts which overlapped on one or more tags and pick the most recent one.

Quite often that works ok, but I'm sure you can think of ways it could pick a sub-optimal next article. Someone who knows the content well could probably pick better suggestions at least some of the time. LLMs are really well-suited to tasks like this, and should in theory have several advantages over human editors (such as not forgetting what I wrote last week).

We want to end up with some simple UI like this, with one or more suggestions for what to read next:

Screenshot of a Read Next UI
We want to enable the rendering of a UI like this, showing the most relevant articles to read next

So how do we figure out which content to recommend based on what you're looking at?
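
At a high level, the idea is to compare embeddings of the articles and rank by similarity. A minimal sketch, with invented types and function names rather than the article's actual code:

// Rank other posts by cosine similarity of their embeddings to the current post.
// The Post shape is illustrative; populate the embedding field with whatever
// embedding call you already use (e.g. OpenAI's embeddings API).
interface Post {
  slug: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

export function recommend(current: Post, others: Post[], count = 3): Post[] {
  return others
    .filter((post) => post.slug !== current.slug)
    .map((post) => ({ post, score: cosineSimilarity(current.embedding, post.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, count)
    .map((scored) => scored.post);
}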

Continue reading

Easy RAG for TypeScript and React Apps

This is the first article in a trilogy that will go through the process of extracting content from a large text dataset - my blog in this case - and making it available to an LLM so that users can get answers to their questions without searching through lots of articles along the way.

Part 1 will cover how to process your text documents into embeddings for easy consumption by an LLM, throw those embeddings into a vector database, and then use that to help answer the user's questions. There are a million articles about this using Python, but I'm principally a TypeScript developer so we'll focus on TS, React and NextJS.

Part 2 covers how to make an AI-driven "What to Read Next" component, which looks at the content of a document (or blog post, in this case) and performs a semantic search through the rest of the content to rank which other posts are most related to this one, and suggest them.

Part 3 will extend this idea by using InformAI to track which articles the user has looked at and attempt to predictively generate suggested content for that user, personalizing the What to Read Next component while keeping the reader completely anonymous to the system.

Let's RAG

About a week ago I released InformAI, which allows you to easily surface the state of your application UI to an LLM in order to help it give more relevant responses to your user. In that intro post I threw InformAI into the blog post itself, which gave me a sort of zero-effort poor man's RAG, as the LLM could see the entire post and allow people to ask questions about it.

That's not really what InformAI is intended for, but it's nice that it works. But what if we want to do this in a more scalable and coherent way? This blog has around 100 articles, often about similar topics. Sometimes, such as when I release open source projects like InformAI, it's one of the only sources of information on the internet about the given topic. You can't ask ChatGPT what InformAI is, but with a couple of tricks we can transparently give ChatGPT access to the answer so that it seems like it magically knows stuff it was never trained on.
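
A minimal sketch of that retrieval-augmented flow - the embedding model, chunk store, top-k value and prompt wording below are assumptions for illustration, not the article's actual implementation:

import OpenAI from 'openai';

const openai = new OpenAI();

// Assume each chunk of blog content has been embedded ahead of time;
// an in-memory array stands in for a real vector database here
interface Chunk {
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

export async function answerWithContext(question: string, chunks: Chunk[]) {
  // 1. Embed the user's question
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: question,
  });
  const queryEmbedding = data[0].embedding;

  // 2. Retrieve the most relevant chunks
  const context = chunks
    .map((chunk) => ({ chunk, score: cosine(queryEmbedding, chunk.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map((scored) => scored.chunk.text)
    .join('\n\n');

  // 3. Stuff the retrieved context into the prompt
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: `Answer the question using this context:\n\n${context}` },
      { role: 'user', content: question },
    ],
  });

  return completion.choices[0].message.content;
}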

Continue reading

Introducing InformAI - Easy & Useful AI for React apps

Most web applications can benefit from AI features, but adding AI to an existing application can be a daunting prospect. Even a moderate-sized React application can have hundreds of components, spread across dozens of pages. Sure, it's easy to tack a chat bot in the bottom corner, but it won't be useful unless you integrate it with your app's contents.

This is where InformAI comes in. InformAI makes it easy to surface all the information that you already have in your React components to an LLM or other AI agent. With a few lines of React code, your LLM can now see exactly what your user sees, without having to train any models, implement RAG, or any other expensive setup.

Inform AI completes the quadrant
LLMs read and write text, Vercel AI SDK can also write UI, but InformAI lets LLMs read UI

InformAI is not an AI itself, it just lets you expose components and UI events via the simple <InformAI /> component. Here's how we might add AI support to a React component that shows a table of a company's firewalls:

<InformAI
  name="Firewalls Table"
  prompt="Shows the user a paginated table of firewalls and their scheduled backup configurations"
  props={{ data, page, perPage }}
/>
Continue reading

Demystifying OpenAI Assistants - Runs, Threads, Messages, Files and Tools

As I mentioned in the previous post, OpenAI dropped a ton of functionality recently, with the shiny new Assistants API taking center stage. In this release, OpenAI introduced the concepts of Threads, Messages, Runs, Files and Tools - all higher-level concepts that make it a little easier to reason about long-running discussions involving multiple human and AI users.

Prior to this, most of what we did with OpenAI's API was call the chat completions API (setting all the non-text modalities aside for now), but to do so we had to keep passing all of the context of the conversation to OpenAI on each API call. This means persisting conversation state on our end, which is fine, but the Assistants API and related functionality makes it easier for developers to get started without reinventing the wheel.

OpenAI Assistants

An OpenAI Assistant is defined as an entity with a name, description, instructions, default model, default tools and default files. It looks like this:
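
In sketch form, creating one through the OpenAI Node SDK looks something like this - the name, instructions, model, tools and file IDs below are placeholder assumptions rather than values from a real Assistant:

import OpenAI from 'openai';

const openai = new OpenAI();

// Placeholder values throughout: the point is the shape of an Assistant,
// with a name, description, instructions, default model, tools and files
const assistant = await openai.beta.assistants.create({
  name: 'Data Analyst',
  description: 'Answers questions about uploaded CSV files',
  instructions: 'You are a careful data analyst. Inspect the files before answering.',
  model: 'gpt-4-1106-preview',
  tools: [{ type: 'code_interpreter' }],
  file_ids: ['file-abc123'], // files uploaded separately via the Files API
});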

Let's break this down a little. The name and description are self-explanatory - you can change them later via the modify Assistant API, but they're otherwise static from Run to Run. The model and instructions fields should also be familiar to you, but in this case they act as defaults and can be easily overridden for a given Run, as we'll see in a moment.

Continue reading

Using ChatGPT to generate ChatGPT Assistants

OpenAI dropped a ton of cool stuff in their Dev Day presentations, including some updates to function calling. There are a few function-call-like things that currently exist within the OpenAI ecosystem, so let's take a moment to disambiguate:

  • Plugins: introduced in March 2023, allowed GPT to understand and call your HTTP APIs
  • Actions: an evolution of Plugins, makes it easier but still calls your HTTP APIs
  • Function Calling: ChatGPT understands your functions, tells you how to call them, but does not actually call them

It seems like Plugins are likely to be superseded by Actions, so we end up with 2 ways to have GPT call your functions - Actions for automatically calling HTTP APIs, Function Calling for indirectly calling anything else. We could call this Guided Invocation - despite the name it doesn't actually call the function, it just tells you how to.

That second category of calls is going to include anything that isn't an HTTP endpoint, so it gives you a lot of flexibility to call internal APIs that never learned how to speak HTTP. Think legacy systems, private APIs that you don't want to expose to the internet, and other places where this can act as a highly adaptable glue.
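
Here's a rough sketch of that flow using the chat completions API; the function, its JSON schema and the model name are invented for illustration. The model only describes the call - our code performs it:

import OpenAI from 'openai';

const openai = new OpenAI();

// A hypothetical internal function the model cannot reach directly
function lookupOrder(orderId: string) {
  return { orderId, status: 'shipped' };
}

const response = await openai.chat.completions.create({
  model: 'gpt-4-1106-preview',
  messages: [{ role: 'user', content: 'Where is order 42?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'lookupOrder',
        description: 'Look up the status of an order by its ID',
        parameters: {
          type: 'object',
          properties: { orderId: { type: 'string' } },
          required: ['orderId'],
        },
      },
    },
  ],
});

// The model returns a description of the call it wants made; it is up to us
// to actually invoke the function and (optionally) send the result back
const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall?.type === 'function') {
  const args = JSON.parse(toolCall.function.arguments);
  console.log(lookupOrder(args.orderId));
}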

Continue reading