2 posts tagged llm

Introducing Task Demon: Vibe Coding with a Plan

In the last 6 months, the way that leading software engineers build software has undergone a fundamental shift.

The adoption of agentic AI coding assistants has heralded the greatest leap in productivity I have encountered in my 20-year career so far. As I wrote previously, adopting Windsurf doubled my output within a week. Where I'd usually be thrilled to find some way to get 20% more done, and would work hard for that 20%, suddenly I'm getting 100% more and it's just... easy.

But if there's a single consistent counter-punch to the Vibe Coding movement, it's the irrefutable fact that no matter how good the agentic AI coding assistant is, it will always do much better work from a detailed prompt that includes a plan than from your two-sentence vibe prompt.

That's what Task Demon does: it takes the two-sentence vibe prompt and blows it up into a sublimely detailed prompt, usually anywhere between 200 and 1,000 lines long, including a full implementation plan that will correctly guide the AI to do the right thing, using your project's structure, dependencies and ways of doing things.

A video is worth a million words. This one is 15 minutes long, but if you use AI to build software, I believe you'll find it worth your time:

15 minutes to learn why Task Demon makes Vibe Coding viable for software engineering professionals

How it works

After using Windsurf and later Claude Code for a while, I found that using the following pattern yielded superb results:

Continue reading

Demystifying OpenAI Assistants - Runs, Threads, Messages, Files and Tools

As I mentioned in the previous post, OpenAI dropped a ton of functionality recently, with the shiny new Assistants API taking center stage. In this release, OpenAI introduced Threads, Messages, Runs, Files and Tools - higher-level concepts that make it a little easier to reason about long-running conversations involving multiple human and AI participants.

Prior to this, most of what we did with OpenAI's API was call the Chat Completions API (setting non-text modalities aside for now), but to do so we had to pass the full context of the conversation to OpenAI on each API call. That means persisting conversation state on our end, which is fine, but the Assistants API and related functionality make it easier for developers to get started without reinventing the wheel.
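To make that concrete, here's a minimal sketch of the old pattern using the OpenAI Python SDK - the model name and messages are illustrative, and the point is simply that the whole history lives on our side and is resent on every call:

```python
from openai import OpenAI

client = OpenAI()

# Conversation state is persisted by us, not by OpenAI.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarise our discussion so far."},
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=conversation,  # the full history goes up on every request
)

# Append the reply so the next call carries the complete context again.
conversation.append(
    {"role": "assistant", "content": response.choices[0].message.content}
)
```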

OpenAI Assistants

An OpenAI Assistant is defined as an entity with a name, description, instructions, default model, default tools and default files. It looks like this:
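A minimal sketch with the Python SDK, as the API looked at the time of this release - the name, description, instructions and file ID below are placeholders, not values from a real Assistant:

```python
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Support Helper",
    description="Answers questions about our product docs.",
    instructions="You are a friendly support agent. Cite the docs when you can.",
    model="gpt-4-1106-preview",        # default model for Runs
    tools=[{"type": "retrieval"}],     # default tools
    file_ids=["file-abc123"],          # default files (placeholder ID)
)

print(assistant.id)  # e.g. "asst_..."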

Let's break this down a little. The name and description are self-explanatory - you can change them later via the modify Assistant API, but they're otherwise static from Run to Run. The model and instructions fields should also be familiar to you, but in this case they act as defaults and can be easily overridden for a given Run, as we'll see in a moment.
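For example, a single Run can swap in a different model or different instructions without touching the Assistant itself - a sketch, with placeholder thread and assistant IDs:

```python
from openai import OpenAI

client = OpenAI()

run = client.beta.threads.runs.create(
    thread_id="thread_abc123",
    assistant_id="asst_abc123",
    model="gpt-3.5-turbo-1106",                     # overrides the Assistant's default model
    instructions="Answer in one short paragraph.",  # overrides the default instructions
)
```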

Continue reading