As we start 2025, it's never been faster to get a SaaS product off the ground. The frameworks, vendors and tools available make it possible to build in weeks what would have taken months or years even just a couple of years ago.
But it's still a lot.
Even when we start from a base template, we still need to figure out our data model, auth, deployment strategy, testing, email sending/receiving, internationalization, mobile support, GDPR, analytics, LLM evals, validation, UX, and a bunch more things.
This morning I launched bragdoc.ai, an AI tool that tracks the work you do and writes things like weekly updates & performance review documents for you. In previous jobs I would keep an achievements.txt
file that theoretically kept track of what I worked on each week so that I could make a good case for myself come review time. Bragdoc scratches my own itch by keeping track of that properly with a chatbot who can also make nice reports for me to share with my manager.
But this article isn't much about bragdoc.ai itself, it's about how a product like it can be built in 3 weeks by a single engineer. The answer is AI tooling, and in particular the Windsurf IDE from Codeium.
In fact, this article could easily have been titled "Use Windsurf or Die". I've been in the fullstack software engineering racket for 20 years, and I've never seen a step-change in productivity like the one heralded by Cursor, Windsurf, Repo Prompt and the like. We're in the first innings of a wave of change in how software is built.
Any time I can add 5% to my productivity, I grab it with both hands. Last year I wrote about what I think is the best hardware setup for software engineers. I guesstimate that I get about a 20% productivity bump from that vs a standard 2 monitor setup, or 70% vs just a laptop screen. I spent years and thousands of dollars chasing that 20%. It's a big deal.
And then Windsurf came along, and within a few days my output across a full afternoon of work was consistently double what it would usually have been. I strongly suspect it will double again in the next year or so, then double again after that.
Because the amount we get paid is directly proportional to the value we create for our employers, maximizing our output is the most important thing we can do for our careers. This is as it should be. Embrace these tools or get out-competed by those who do.
This type of tooling is already the greatest productivity booster I've seen in my career, and it hasn't come close to its full potential yet. AI tooling for software engineering is no longer optional.
So let's dive in to how I built bragdoc.ai in 3 weeks. The first ingredient was 20 years of experience doing this sort of thing. That's the hardest part of this equation and the reason why I think senior engineers are set to benefit most from these tools (see Winners and Losers below). But whether you have that or not, the following ingredients are a pretty good place to start:
There are other things in the mix, obviously, but those are the main ingredients. Now let's look at how I use them.
So here are my top tips on how to use the current set of AI software engineering tooling effectively to increase your output.
I'm couching these in terms of using Windsurf, but the same principles apply to Cursor, Repo Prompt, and any other AI tooling that comes along.
Absolutely critical in all this is setting up a system prompt for the LLMs doing work for you. Yifan's YouTube video on this explains it so clearly that I won't try to do better here. The same things he talks about for Cursor apply equally to Windsurf. My .windsurfrules file for bragdoc is a couple of hundred lines long, with a dozen edits so far as I iterate on it.
The initial work of learning the approach and writing that file took several hours; the second time I do it, it'll take 30 minutes. But that time invested is critical in getting the AI tooling to do what you actually want. It means you don't have to keep repeating yourself about how you want things done every time you talk to the bot, and can focus instead on the actual work you want done.
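To give a flavor of what goes in such a file, here is an illustrative sketch (these are not lines from my actual .windsurfrules, just the kind of standing instructions I mean):

```
# .windsurfrules (illustrative excerpt, not the real file)
- This is a TypeScript / Next.js / React project. All new code must be TypeScript.
- Prefer React Server Components; add "use client" only when a component needs state or events.
- Never edit generated files or lockfiles.
- After any feature is added or changed, update README.md and FEATURES.md to match.
- Keep functions small, and colocate tests next to the code they test.
```

The point is that these rules travel with every chat session automatically, so you stop re-explaining your conventions.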
LLMs still have pretty limited context windows, and any application of even moderate complexity will quickly exceed that. When I work on a codebase that I'm familiar with, I have a mental model of most of the stuff in that codebase, but LLMs don't have that benefit so we need to give them a hand.
Being able to point Windsurf at high quality documentation (README.md) and a description of all of the features that already exist (FEATURES.md) can give it that mental model without chewing up a ton of context. The other benefit of having a summarization of your entire codebase is that current generation LLMs start forgetting things when the context is long, so shorter chat sessions tend to produce better results. You can of course use Windsurf itself to keep those files updated too.
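As a sketch of what that summarization looks like, hypothetical FEATURES.md entries might read like this (illustrative, not bragdoc's real file):

```markdown
## Achievements
- Create, edit and delete achievements via the chatbot or the web UI
- Each achievement records what was done and when

## Documents
- Generate weekly update and performance review documents from achievements
- Share a generated document with your manager
```

A few hundred lines of this gives the model a working map of the codebase for a tiny fraction of the context that the code itself would consume.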
When I start working on a new feature - let's say it's support for assigning an impact rating to an Achievement - I start a new chat session inside Windsurf and tell it something like this:
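The prompt itself doesn't need to be fancy; something along these lines (an illustrative reconstruction, not my verbatim prompt):

```
We're adding support for assigning an impact rating to an Achievement.
Before writing any code, create features/impact/requirements.md describing:
- the data model change
- the UI changes needed to set and display the rating
- any API or schema migrations required
Ask me clarifying questions first if anything is ambiguous.
```

The key move is making it write the requirements document before touching any code.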
The generated requirements.md will usually miss the mark on a few things so I will typically need to ask Windsurf to fix a few things, but we quickly converge on a detailed document that describes the feature at a level that a junior engineer should have no trouble implementing. In this instance you can see I actually had it create a PLAN.md file as well, which is a more detailed breakdown of the work to be done. Sometimes that can yield better results, especially for complex tasks.
Once we've got that, I'll commit it to a new branch, and ask Windsurf to start implementation. I ask it to keep this file updated as it proceeds with the work, and to keep a log.md file in the same directory that describes the work it's done. This way I can easily start new chat sessions to reset to a short LLM context and therefore better results.
From there it's generally a matter of just iteratively asking Windsurf to proceed with the next part of the implementation. It will take some number of steps before it stops and asks you what to do next - for the impact feature requirements.md I probably had to prompt it a couple of dozen times to get to a complete implementation. At least 50% of the edits it makes result in TypeScript errors, so there is some tedious work fixing those, but it's still enormously faster than typing it all out myself.
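For a sense of scale, the changes themselves are ordinary application code. A hypothetical sketch of the shape of the impact feature (the real bragdoc schema isn't shown in this article, so the type and field names here are assumptions):

```typescript
// Hypothetical sketch: an impact rating is just an extra field
// on the Achievement model plus a little logic around it.
type Achievement = {
  id: string;
  title: string;
  impact?: 1 | 2 | 3; // assumed 1-3 scale: low / medium / high
};

// Surface the highest-impact work first when generating a report;
// achievements without a rating sort last.
export function sortByImpact(items: Achievement[]): Achievement[] {
  return [...items].sort((a, b) => (b.impact ?? 0) - (a.impact ?? 0));
}
```

A thousand changed lines of this kind of code in 90 minutes is the productivity claim in concrete terms.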
The final PR for the impact feature shows that I made 14 commits over about a 90 minute session to get it done, with about 1000 LOC changed. Committing often is highly recommended, as it's easy for Windsurf to do completely the wrong thing and you want to be able to roll back to a known good state.
Several hundred lines of that are the documents that Windsurf creates and uses to keep itself on track. Once the implementation is finished, one could and probably should delete the contents of the features/impact directory, as it's unlikely to be useful in the future, but one nice side benefit here is that we can implement some of the feature, go work on something else, then come back and finish it later as if we'd never left.
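The commit-often, roll-back-when-needed loop mentioned above is plain git. A throwaway demonstration (assumes git is on your PATH; it runs entirely in a temp directory):

```shell
set -e
# Work in a throwaway repo so this is safe to run anywhere
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "You"

# Checkpoint a known good state before letting the AI loose
echo "good version" > feature.ts
git add feature.ts && git commit -qm "checkpoint: working state"
good=$(git rev-parse HEAD)

# The AI makes an edit that turns out to be wrong
echo "broken AI edit" > feature.ts
git add feature.ts && git commit -qm "wip: AI edit"

# Roll back to the last known good commit
git reset -q --hard "$good"
cat feature.ts
```

Because each checkpoint is cheap, a bad run of edits costs you one `git reset` rather than an afternoon of untangling.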
I'd love to be able to use GitHub issues, and I'm sure that's the way it will go in the coming months, but for now I use a simple TODO.md to track issues and work that needs to be done. Until these tools get a native ability to CRUD issues in whatever issue tracker you want, this approach seems to work well enough for a single developer. Obviously, it won't scale well.
As with most of the other docs in the repo, a lot of the TODO.md was written by Windsurf itself in response to a prompt. Six months from now this approach will probably make no sense any more.
I have no affiliation with Codeium other than being a happy customer.
All the work I've been doing with Windsurf is stuff that the various parts of it have probably been trained on a lot. TypeScript projects using NextJS and React, a fairly common set of libraries, established API patterns, etc.
Bragdoc currently has about 500k tokens of content, between the code and the documentation. Both of those are really important to Windsurf. Many React applications will have 10x that. At the moment, none of the models available in Windsurf can deal with that much context simultaneously, so it is prone to forgetting things. I'm sure there are already various tricks in place to reduce that, but it also seems likely that models will continue to support larger and larger context lengths. It will only get better at this, and it's plausible that it will get close enough to perfect to stop being frustrating.
I highly recommend IndyDevDan's YouTube Channel for insights on how to use these tools and on skating to where the puck is heading. Many of these caveats are going to disappear in the next year or so.
Windsurf is a junior engineer from the seventh dimension, in that it's highly capable at many things, needs guidance in others, and sometimes does things that make no sense whatsoever.
50% of the TypeScript file changes it makes result in type errors, which you then have to go through and fix. Often if you tell it "TS errors in index.ts" it will fix them, but it is frustrating to have to do that. I'm sure that will get fixed before long, and will make a material difference in the output once again.
So after a few weeks of using it, I'm pretty satisfied that Windsurf can generally compress a day's work into an afternoon. That said, right now it feels fast and slow at the same time. I haven't taken actual measurements, so pinch of salt and all that, but it will often take around 10 seconds to make a change to a file, so when it chains 10+ edits together (which is amazing), it can take a minute or so to complete.
This part needs to get 10x faster, and I'm sure it will. At that point I think it will be able to compress work that would have taken a day in 2023 into about an hour.
This is a game-changing technology that is likely to disrupt our entire industry. As is often the case, it will probably benefit some a lot more than others. Putting my old Engineering Manager hat on, if I want to increase the velocity of my team, I don't want to add more junior engineers, I want a faster Windsurf for my senior folks. And I'm sure that's what I'll get before long. I wouldn't want to be a junior engineer in the job market in 2025.
Leetcode's days are numbered for the same reason we don't ask people to do long division by hand in interviews. Companies will no longer be able to perform the mental gymnastics required to ask a candidate to solve a logic problem they will never encounter, without the aid of a tool they will definitely have available and would be foolish not to use. Too many heads will implode, but more importantly too much money is at stake.
Focus will switch towards experience and ability to use those very tools to have high output for the company. Capitalism demands it. There was already a high dynamic range in software engineering - engineers who could create 10x more value than others. These AI tools add another order of magnitude to that range.
That ought to be across the board but it's possible that the old timers won't embrace it as fast as the younger engineers, so there may be an opportunity for some to close that gap a little.