I'm a big fan of IndyDevDan's YouTube channel. He has greatly expanded my thinking when it comes to LLMs. One of the interesting things he does is write many of his prompts with an XML structure, like this:
<purpose>
You are a world-class expert at creating mermaid charts.
You follow the instructions perfectly to generate mermaid charts.
The user's chart request can be found in the user-input section.
</purpose>
<instructions>
<instruction>Generate a valid mermaid chart based on the user-prompt.</instruction>
<instruction>Use the diagram type specified in the user-prompt.</instruction>
<instruction>Use the examples to understand the structure of the output.</instruction>
</instructions>
<user-input>
State diagram for a traffic light. Still, Moving, Crash.
</user-input>
<examples>
<example>
<user-chart-request>
Build a pie chart that shows the distribution of Apples: 40, Bananas: 35, Oranges: 25.
</user-chart-request>
<chart-response>
pie title Distribution of Fruits
"Apples" : 40
"Bananas" : 35
"Oranges" : 25
</chart-response>
</example>
//... more examples
</examples>
I really like this structure. Prompt Engineering has been a dark art for a long time. We're suddenly programming in English, which is hilariously imprecise as a programming language, and it doesn't quite feel like "real engineering".
But prompting isn't really programming in English - it's programming in tokens. It just looks like English, so it's easy to fall into the trap of feeding it nothing but prose. We're not actually constrained to that at all - we can format our prompts more like XML and reap some considerable rewards:
It's easier for humans to reason about prompts in this format
I've started migrating many of my prompts to this format, and noticed a few things:
It organized my thinking around what data the prompt needs
Many prompts could or should use the same data, but repeat fetching/rendering logic each time
For example, bragdoc.ai basically does 2 things with LLMs: Extracting Achievements from written text, and Generating Documents from Achievements. We can extract Achievements from either a chatbot message or a git commit history, so we have a separate prompt for each, but as you can imagine those prompts have a huge amount in common.
To each we also provide the user's project and company data, as well as any custom instructions from the user. But the Commit Extractor is given a set of commits and repo data, whereas the Text Extractor is fed a message and a chat history. The Document Generator also needs to be fed with a lot of the same data - companies and projects - but is also given a set of Achievements to generate a document from.
The Venn diagram of what each prompt needs has companies, projects and the user's custom instructions sitting in the overlapping middle, with the chat message, the commit data, and the Achievements out at the edges.
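Sketched as TypeScript purely for illustration (the real interfaces in the bragdoc repo are defined separately - one of them appears later in this post), the overlap looks something like this:

// Illustrative only - the type names and the import path are made up
import type {
  Company,
  Project,
  User,
  Message,
  Repository,
  Commit,
  Achievement,
} from '@/lib/types';

// The data every prompt needs
interface SharedPromptData {
  companies: Company[];
  projects: Project[];
  user: User;
  userInstructions?: string; // any custom instructions from the user
}

// The text extractor adds the chat message and history...
interface ExtractFromMessageProps extends SharedPromptData {
  message: string;
  chatHistory: Message[];
}

// ...the commit extractor adds repo data and commits instead...
interface ExtractFromCommitsProps extends SharedPromptData {
  repository: Repository;
  commits: Commit[];
}

// ...and the document generator adds the Achievements to write up.
interface GenerateDocumentProps extends SharedPromptData {
  achievements: Achievement[];
}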
Because of all of that overlap, I'd extracted a bunch of functions that each rendered one of the shared pieces to a prompt string. That works, but assembling strings by hand is exactly the kind of thing JSX is already good at, which is what led me to build mdx-prompt: write each prompt as an MDX file and render it to a string with React.
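Heavily abridged, and leaning on components introduced below, the extraction prompt file looks something like this (a sketch of the shape, not the exact file):

{/* Sketch of extract-achievements.mdx - the component APIs and the `data`
    variable in scope here are illustrative, not mdx-prompt's exact contract */}
<Purpose>
  You are a careful and attentive assistant who extracts work achievements
  from conversations between users and AI assistants.
</Purpose>

<Instructions>
  <instruction>Do not invent details that the user did not explicitly say.</instruction>
  {/* ...more instructions */}
</Instructions>

<Variables>
  <Companies companies={data.companies} />
  <Projects projects={data.projects} />
  <user-input>{data.message}</user-input>
</Variables>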
The real thing is the extract-achievements.mdx file in the prompts directory of the bragdoc-ai repository. First of all, what are we even looking at here? This is MDX, a mature and well-supported mashup of JSX and Markdown. I use it for this very blog site, and it has some nice attributes for writing prompts:
It supports plain, unstructured text
It supports JSX, allowing us to create React components for our prompts
It supports XML, so we can use the XML-style prompt syntax for structured instructions that don't need a full React component
It renders to something like this:
rendered-prompt.xml
<purpose>
You are a careful and attentive assistant who extracts work achievements
from conversations between users and AI assistants. Extract all of the
achievements in the user message contained within the <user-input>
tag. Follow all of the instructions provided below.
</purpose>
<instructions>
<instruction>
Pay special attention to:
- Recent updates or progress reports
- Completed milestones or phases
- Team growth or leadership responsibilities
- Quantitative metrics or impact
- Technical implementations or solutions
</instruction>
<instruction>
Each achievement should have a clear, action-oriented title (REQUIRED) that:
- Starts with an action verb (e.g., Led, Launched, Developed)
- Includes specific metrics when possible (e.g., "40% reduction", "2x improvement")
- Mentions specific systems or teams affected
- Is between 10 and 256 characters
</instruction>
<instruction>
Example good titles:
- "Led Migration of 200+ Services to Cloud Platform"
- "Reduced API Response Time by 40% through Caching"
- "Grew Frontend Team from 5 to 12 Engineers"
</instruction>
<instruction>Do not invent details that the user did not explicitly say.</instruction>
//... more instructions
</instructions>
<input-format title="You are provided with the following inputs:">Hello</input-format>
<variables>
<today>2/3/2025</today>
<user-instructions>If I don't mention a specific project, I'm talking about Brag Doc.</user-instructions>
<chat-history></chat-history>
<companies>
<company>
<id>7972262d-c63d-4c87-b449-24dc634ca152</id>
<name>Egghead Research</name>
<role>Chief Scientist</role>
<start-date>12/31/2022</start-date>
<end-date>Present</end-date>
</company>
<company>
<id>65e274f2-4f5a-4e68-89ca-0fcf9c4898cb</id>
<name>Palo Alto Networks</name>
<role>Principal Engineer</role>
<start-date>1/31/2016</start-date>
<end-date>9/29/2021</end-date>
</company>
</companies>
<projects>
<project>
<id>24ac74b8-dfe6-4fdc-bf34-4a6b9ee22be6</id>
<name>BragDoc.ai</name>
<description>AI-powered self-advocacy tool for tech-savvy individuals.</description>
</project>
</projects>
</variables>
<examples>
{
"title": "Launched AI Analysis Tool with 95% Accuracy at Quantum Nexus",
"summary": "Developed an AI tool for real-time data analysis with 95% accuracy for Quantum Nexus, playing a pivotal role in Project Orion's success.",
"details": "As part of Project Orion at Quantum Nexus, I was responsible for developing a cutting-edge AI tool focused on real-time data analysis. By implementing advanced algorithms and enhancing the training data sets, the tool reached a 95% accuracy rate. This result significantly supported the company's research objectives and has been positively acknowledged by stakeholders for its robust performance and reliability."
}
</examples>
"title": "Launched AI Analysis Tool with 95% Accuracy at Quantum Nexus",
"summary": "Developed an AI tool for real-time data analysis with 95% accuracy for Quantum Nexus, playing a pivotal role in Project Orion's success.",
"details": "As part of Project Orion at Quantum Nexus, I was responsible for developing a cutting-edge AI tool focused on real-time data analysis. By implementing advanced algorithms and enhancing the training data sets, the tool reached a 95% accuracy rate. This result significantly supported the company's research objectives and has been positively acknowledged by stakeholders for its robust performance and reliability.",
The <Purpose>, <Instructions>, and <Variables> tags are all just basic JSX components exported by the mdx-prompt library. They're part of the standard set of core components that ship with mdx-prompt, but they're deliberately simple and it's easy to make your own.
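To give a sense of how thin those core components are, <Purpose /> can be little more than this (a sketch, not mdx-prompt's actual source):

import React from 'react';

// Wraps its children in a <purpose> tag. Lowercase tags like <purpose> are
// why the TypeScript declarations discussed later in this post are needed.
export function Purpose({ children }: { children: React.ReactNode }) {
  return <purpose>{children}</purpose>;
}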
You can imagine how similar the extract-commit-achievements.mdx prompt looks, with a lot of reused components and a tweaked prompt and set of instructions. The generate-document.mdx prompt looks quite similar, with the same companies and projects components being rendered, but also a set of achievements to generate a document from.
Beyond the built-in components, the prompt above also uses a bunch of our own - chiefly the Companies and Projects components. They're just normal React components (elements.tsx):
export function Company({ company }: { company: CompanyType }) {
  // Renders a single <company> block (trimmed - the real component also
  // renders the role and start/end dates seen in the output above)
  return (
    <company>
      <id>{company.id}</id>
      <name>{company.name}</name>
    </company>
  );
}

export function Companies({ companies }: { companies: CompanyType[] }) {
return (
<companies>
{companies.map((company) => (
<Company key={company.id} company={company} />
))}
</companies>
);
}
That's what mdx-prompt lets you do. It's a simple library that lets you write your prompts in JSX, and then render them to a string. It works great alongside React.
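Under the hood, turning JSX into a prompt string is essentially React's own server-side string rendering. Conceptually it boils down to something like this (mdx-prompt wraps this up so you don't call it yourself):

import { renderToStaticMarkup } from 'react-dom/server';

// React renders unknown lowercase tags as custom elements, so the output
// is exactly the XML-style prompt text we want
const prompt = renderToStaticMarkup(
  <purpose>You are a world-class expert at creating mermaid charts.</purpose>
);
// prompt === '<purpose>You are a world-class expert at creating mermaid charts.</purpose>'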
Benefits
We get a number of benefits from this:
Reuse of components such as <Companies /> and <Projects />
Composability of prompts, where we can easily add or remove sections
JSX syntax highlighting and linting
Familiarity with JSX for React developers
A well-defined set of props required to render the prompt
That last one is important. By creating a JSX prompt in this way, we've forced ourselves to distill down to the essential data that the prompt needs, as expressed in our ExtractAchievementsPromptProps type. Not only does this make it easier to understand what data we need to assemble for the prompt, it also makes it easier to run evals against the prompt with mock data:
ExtractAchievementsPromptProps.ts
// It's much easier to reason about what a prompt needs with an interface.
// Much easier to feed test and eval data to as well.
export interface ExtractAchievementsPromptProps {
companies: Company[];
projects: Project[];
message: string;
chatHistory: Message[];
user: User;
}
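For example, an eval or a quick manual check can just build one of these objects from fixture data and pass it to the render() function shown later in this post (the type import path here is illustrative):

import { render } from '@/lib/ai/extract-achievements';
import type { ExtractAchievementsPromptProps } from '@/lib/ai/prompts/types'; // illustrative path
import { companies, projects, user } from '@/lib/ai/prompts/evals/data/user';

const props: ExtractAchievementsPromptProps = {
  companies,
  projects,
  user,
  chatHistory: [],
  message: 'Shipped the new onboarding flow and cut sign-up time by 30%',
};

// Renders the full prompt string exactly as the LLM would see it
const prompt = await render(props);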
I didn't want this article to be too long, so I'm publishing 2 other articles at the same time that go deeper into this. The first is mdx-prompt: Real World Example Deep Dive, where we look at a real-world example of mdx-prompt being used in a production open source Next JS application. The second is EDD: Eval-Driven-Design with mdx-prompt, where we look at how to write tests for these prompts.
Downsides / Challenges
I had to swim upstream a little to get this working in all the different places we want it to run, chiefly when it comes to rendering:
React / Next JS compatibility quirks
mdx-prompt needs to work in a bunch of different places:
Rendering in API endpoints to power LLM calls
Rendering in CLI functions like npx braintrust eval
I spend most of my time in Next JS and don't have a full mental model of its integration with React. A bunch of times while creating mdx-prompt I ran into problems with incompatible React versions. Some of that was just a bad rollup configuration, but annoying problems abound and this stuff can be a little quirky to get running in all places at once.
I never did find a way to have a React Server Component render one of the prompts to text, which is what I wanted to be able to do in an RSC along with the Bright syntax highlighting library. In the end I used a server endpoint to render the prompt to text and a client component to fetch that text via an API call, which is slightly inelegant. On the other hand, it is interesting to have API endpoints that return well-structured LLM prompts. Maybe that will be useful elsewhere.
Some TypeScript chores
Obviously, most of the XML tags that I'm using in my prompts don't exist in the HTML spec, so TypeScript is not happy about them. In Next JS, I've found that you can just declare them in JSX.IntrinsicElements and TypeScript will be happy. In my Next JS app I just created a global.d.ts file like this:
global.d.ts
import React from 'react';
type CustomHTMLProps = React.DetailedHTMLProps<
React.HTMLAttributes<HTMLElement>,
HTMLElement
>;
// Define any custom tags you want to permit for your LLM prompts here
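// (Sketch - the real declarations list every custom tag used in the prompts)
declare global {
  namespace JSX {
    interface IntrinsicElements {
      purpose: CustomHTMLProps;
      instructions: CustomHTMLProps;
      instruction: CustomHTMLProps;
      'user-input': CustomHTMLProps;
      // ...and so on for each XML-style tag
    }
  }
}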
Ok, that's cool - the TypeScript errors go away and everything builds, tests and runs just fine. It's a bit annoying though: the actual list of custom tags is a lot longer than that. Maybe there's some better TypeScript or JSX trick that could make this more pleasant.
Note though that you only need to do this for XML-style tags used inside your React components (like elements.tsx above) - any XML-style tags you use directly in your .mdx files will just get rendered as text as expected.
Abuse of ReactDOM
Ultimately this is a bit of a hack, and because it's using ReactDOM to render the JSX to a string, it also has edge-case bugs like the one where you use a <title>Some Title</title> tag as part of your JSX prompt only for the <title> to get hoisted to the top of the document because ReactDOM thinks it's rendering an HTML document. There's probably other stuff like that.
Of course, React renders text just fine too, so you can totally just use mdx-prompt without using xml-style tags and still benefit from all the composability and reuse stuff.
We're not importing in the usual way
As I write about in the deep-dive post, we're using JSX/MDX to author and render our prompts. The way we do that at the moment is like this:
// Imports and the prompt path are illustrative - adjust to your own layout
import { renderMDXPromptFile } from 'mdx-prompt';
import * as components from './elements';
import type { ExtractAchievementsPromptProps } from './types';

const promptPath = 'lib/ai/prompts/extract-achievements.mdx';

export async function render(data: ExtractAchievementsPromptProps): Promise<string> {
  return await renderMDXPromptFile({
    filePath: promptPath,
    data,
    components,
  });
}
The render() function returns the rendered prompt as a string. We're using the renderMDXPromptFile helper from mdx-prompt to do the rendering, which is in turn using ReactDOM to render the JSX to a string. This is part of what allows us to not have to import all (or indeed any) of the components that our .mdx files use - the .mdx prompt files can just render the <Purpose />, <Instructions />, and other built-in mdx-prompt components, and we pass in all of our app's custom ones defined in elements.tsx.
There are ups and downs associated with this - one issue is that we lose type checking, which I discuss a little more in the deep-dive post. It's also maybe possible that some bundlers could have difficulty seeing the import and leave out the file from the build, though it's worked just fine for me in my Next JS apps.
Integrating into the UI
One of the appealing things about writing prompts with JSX/MDX is that you can just render it like any other React component. This makes it fairly easy to render our prompts into the UI, so that we can iterate on them, feed them different data, etc. It really does beat console.logging prompts to the terminal, where we can't benefit from syntax highlighting and it's easy to just lose things in the noise.
The fact that mdx-prompt is just React is a little deceptive, though: when we render React components in the browser we normally want them turned into DOM elements, whereas here we just want the prompt rendered as a string.
I made a page at https://www.bragdoc.ai/prompt that uses Next JS to render prompts in the browser. You can open that page (no account needed) and see exactly what the rendered prompts used by bragdoc.ai look like:
I tried to do that in an RSC but Next JS really doesn't want you rendering React components into strings in server components. It seems to be a somewhat common issue judging by the GitHub issues of people discussing how to get around it. In the end I just decided to create a simple API to render the prompt to a string, then fetch it via SWR in the browser:
route.ts
import { NextResponse } from 'next/server';
import { render as renderExtractAchievements } from '@/lib/ai/extract-achievements';
import { render as renderExtractCommitAchievements } from '@/lib/ai/extract-commit-achievements';
import { render as renderGenerateDocument } from '@/lib/ai/generate-document';
import {
  companies,
  projects,
  user,
  repository,
  commits,
} from '@/lib/ai/prompts/evals/data/user';
import { chatHistory, expectedAchievements as examples } from '@/lib/ai/prompts/evals/data/extract-achievements';
import { existingAchievements } from '@/lib/ai/prompts/evals/data/weekly-document-achievements';

type Params = Promise<{
  id: string;
}>;

// This is a Server Route, so no "use client" here
export async function GET(request: Request, { params }: { params: Params }) {
  const { id } = await params;
  let prompt = '';

  // Switch over the prompt id and render it with the eval fixture data
  switch (id) {
    case 'extract-achievements':
      // The exact props passed to each render() call are abridged/illustrative here
      prompt = await renderExtractAchievements({ companies, projects, user, chatHistory, message: 'Example user message' });
      break;
    case 'extract-commit-achievements':
      prompt = await renderExtractCommitAchievements({ companies, projects, user, repository, commits });
      break;
    case 'generate-document':
      prompt = await renderGenerateDocument({
        companies,
        projects,
        user,
        achievements: existingAchievements,
        userInstructions: 'Always use the title "Weekly Update"'
      });
      break;
  }

  // Return the rendered prompt as a text/html response
  return new NextResponse(prompt, {
    status: 200,
    headers: {
      'Content-Type': 'text/html',
    },
  });
}
It's not quite as idiomatic as passing props to React components, and I don't love switching over the id in this way as it won't scale particularly well, but it does give us a really easy way to render a nicely formatted prompt to a string. It uses fake data in this case, but it's also easy to integrate with the session, your database, or whatever you need.
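The client side of this is just a few lines of SWR. Here's a sketch, assuming the route above is mounted at /api/prompt/[id] (the real path will depend on where you put the route file):

'use client';

import useSWR from 'swr';

// Fetch the rendered prompt from the API route as plain text
const fetcher = (url: string) => fetch(url).then((res) => res.text());

export function PromptViewer({ id }: { id: string }) {
  const { data: prompt, isLoading } = useSWR(`/api/prompt/${id}`, fetcher);

  if (isLoading || !prompt) return <p>Loading…</p>;
  return <pre>{prompt}</pre>;
}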
This is another way that Evals prove their worth - having good data for Evals means by definition that you have good data to render your prompts with. It's trivial to throw together an app/prompts/page.tsx file with contents like the above and a few seconds later see the full prompt rendered in your browser, instead of searching for it in terminal output or logs.
In part 2, we'll go through a complete example of how bragdoc.ai uses mdx-prompt for all of its core LLM capabilities. Then, in part 3 we take a more in-depth look at how easy it is to create Evals with mdx-prompt. In 2025 Evals really are table stakes for any AI app, and they're worth embracing early as they'll almost certainly improve what may well be the most important part of your app.